All Topics

I want to calculate the error count from the logs. The errors are of two types, which can be distinguished only from the flow-end event, i.e. [ flow ended put :sync\C2V]. What condition can I put so that I can get this information from the given log?

index=us_whcrm source=MuleUSAppLogs sourcetype="bmw-crm-wh-xl-retail-amer-prd-api" ((severity=ERROR "Transatcion") OR (severity=INFO "Received Payload"))

I am using this query to get the logs. Now I want a condition so that, when severity=ERROR, I can also get the severity=INFO "Received Payload" event (for the correlationId details) and the flow-end event, so that I can determine the error type.
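A minimal sketch of one way to stitch these together, assuming every event carries a correlationId field and the flow-end line appears in _raw in the form shown above (the rex pattern is an assumption based on that single sample):

index=us_whcrm source=MuleUSAppLogs sourcetype="bmw-crm-wh-xl-retail-amer-prd-api"
    ((severity=ERROR "Transatcion") OR (severity=INFO "Received Payload") OR "flow ended")
| eval is_error=if(severity="ERROR", 1, 0)
| rex "flow ended\s+\w+\s*:(?<flow_end_type>[^\]\s]+)"
| stats max(is_error) as has_error values(flow_end_type) as flow_end_type values(severity) as severities by correlationId
| where has_error=1
| stats count by flow_end_type

The stats by correlationId pulls the ERROR, the "Received Payload" INFO event, and the flow-end event of one transaction into a single row, so the final count splits errors by flow-end type.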
Any idea on how to configure total calls per 1 hour and total calls per 24 hours App-D metrics? Please help me here.
Hello to everyone! I have a UF installed on an MS file server. Our Unified Communications Manager sends CDR and CMR files to this file server via SFTP. Often enough, I see error messages, as you can see in the screenshot (the UF cannot read the file). The strangest thing is that all the information from such files is successfully read. What is wrong with my UF settings? Or maybe this is not the UF?

props.conf

[ucm_file_cdr]
SHOULD_LINEMERGE = false
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = dateTimeOrigination
BREAK_ONLY_BEFORE_DATE = false
MAX_TIMESTAMP_LOOKAHEAD = 60
initCrcLength = 1500
ANNOTATE_PUNCT = false
TRANSFORMS-no_column_headers = no_column_headers

[ucm_file_cmr]
SHOULD_LINEMERGE = false
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = dateTimeOrigination
BREAK_ONLY_BEFORE_DATE = false
MAX_TIMESTAMP_LOOKAHEAD = 13
initCrcLength = 1000
ANNOTATE_PUNCT = false
TRANSFORMS-no_column_headers = no_column_headers

transforms.conf

[no_column_headers]
REGEX = ^INTEGER\,INTEGER\,INTEGER.*$
DEST_KEY = queue
FORMAT = nullQueue
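A minimal inputs.conf sketch of one thing worth trying, assuming the errors occur because the UF opens a CDR/CMR file while the SFTP upload is still in progress (the monitor path is a placeholder): give splunkd a grace period before it treats a quiet file as finished by raising time_before_close on the monitor stanza.

[monitor://D:\cdr]
sourcetype = ucm_file_cdr
# wait 10 seconds after the last write before closing the file (default is 3)
time_before_close = 10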
I have a Splunk alert where I specify the fields using "| fields ErrorType host UserAgent Country IP_Addr", and I want to receive this column order in the SOAR platform. When I look at the JSON results and the UI in SOAR, the column order has changed to host, Country, IP_Addr, ErrorType and UserAgent (not the expected result). I think this has to do with the REST call and the JSON data, but I would like to check if there is any quick fix we could apply on the Splunk or SOAR side to show the proper column order. Any help on this will be much appreciated.
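A minimal sketch of one workaround on the Splunk side, under the assumption that the consumer ends up sorting keys lexicographically (JSON objects themselves carry no ordering guarantee): prefix the field names with an ordering index before the results are handed off.

... | fields ErrorType host UserAgent Country IP_Addr
| rename ErrorType as 1_ErrorType, host as 2_host, UserAgent as 3_UserAgent, Country as 4_Country, IP_Addr as 5_IP_Addr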
I have a question about security advisory SVD-2023-0805. It states that only Splunk Web is affected, but the description clearly mentions the issue is caused by how OpenSSL is built, which is a very generic library. For this reason I would like to check whether indeed only Splunk Web is affected, or whether Splunk installations on Windows in general are affected. I can imagine that OpenSSL is also used when an SSL/TLS connection is made from a forwarder to an indexer. This leads to the question: are universal forwarders on Windows also affected by this security advisory, even when Splunk Web is disabled?
Hello,

Currently we have an NFS drive which is mounted on the /opt/archive directory. The Splunk indexer installation is on Red Hat. We plan to change the remote storage IP address.

Current entry in /etc/fstab:

192.168.24.1:/opt      /opt/archive     nfs    vers=4,rw,intr,nosuid  0  0

1. Before unmounting, is it required to stop the rolling of cold buckets to frozen? How do I stop this roll?
2. After mounting the new remote drive for frozen buckets, is there a way to verify that the frozen directory is receiving buckets from cold?
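A minimal sketch of one way to watch freeze activity from the indexer's own logs, assuming the default _internal logging is intact (BucketMover is the splunkd component that handles bucket rolling; the host filter is a placeholder):

index=_internal sourcetype=splunkd component=BucketMover host=<your_indexer>
| table _time host log_level _raw

BucketMover events mentioning your frozen path after the remount would indicate that cold-to-frozen rolling is reaching the new drive.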
Hello, I've been working with AppDynamics for some time now, and I'm looking to enhance our monitoring and analytics capabilities by integrating it with Splunk. I believe this integration can offer a wealth of insights. Has anyone here successfully integrated AppDynamics with Splunk? I'm particularly interested in hearing about any best practices, challenges you've encountered, and the impact it has had on your application monitoring and troubleshooting efforts. Additionally, if anyone has pursued the Splunk Certification or is familiar with the certification process, could you share your experiences and any specific aspects of Splunk that you found especially relevant in the context of AppDynamics integration? I also checked this: https://splunkbase.splunk.com/app/4315#:~:text=StreamWeaver%20makes%20integrating%20your%20AppDynamics,end%20observability%20and%20AIOps%20goals. Thanks in advance!
I have an alert which runs to find a few values, and I need to write the results of the alert to a newly created index. I used the "Log Event" alert action and specified the newly created index as the destination for the alert output, but the output is not getting ingested into the new index; when I tried with main (the default index), the output of the alert was ingested. The newly created index itself is working: I tried ingesting other data into it manually with files. So what could be the issue that the alert results are not getting ingested into the newly created index?
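A minimal sketch of one check worth running, under the assumption that the search head executing the alert must itself know about the destination index for the Log Event action to route events to it (replace your_new_index with the real name):

| rest /services/data/indexes splunk_server=local
| search title=your_new_index
| table title disabled

If nothing comes back, define the index on the search head (indexes.conf) as well as on the indexers, then re-run the alert.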
Hi, I am trying to log in to the Search Head server. It gives me the error:

500 Internal Server Error
Oops. The server encountered an unexpected condition which prevented it from fulfilling the request. Click here to return to Splunk homepage.

If I put in a wrong password it gives a wrong-password error, so this does not look like it is related to authentication.
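A minimal sketch of where to look next, assuming the _internal index is searchable from another instance (otherwise the same messages are on disk in $SPLUNK_HOME/var/log/splunk/web_service.log):

index=_internal sourcetype=splunk_web_service log_level=ERROR
| table _time host _raw

Errors logged by Splunk Web at the moment of the failed login usually name the component that raised the 500.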
Network - vulnerabilities detected on switches not resolved over a month
A configured field is not showing under the interesting fields. I am getting a ';;;;;;;;;;;;;' value after searching with index="Index Name" sourcetype=*
Hi team,

I need to extract new fields using rex from the raw data below:
1. ResponseCode
2. url

message: INFO [nio-8443-exce-8] b. b. b.filter.loggingvontextfilter c.c.c.c.l.cc.f.loggingcintextfil=ter.post process(Loggingcintextfilter.java"201)-PUT/actatarr/halt/liveness||||||||||||METRIC|--|Responsecode=400|Response Time=0
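A minimal sketch of the two extractions, assuming the URL always directly follows the HTTP verb and runs up to the first pipe, and that the code always appears as Responsecode=<digits> (both assumptions are based on this single sample):

... | rex "(?:GET|PUT|POST|DELETE)(?<url>\/[^\|]+)"
    | rex "Responsecode=(?<ResponseCode>\d+)"

Against the sample above this yields url=/actatarr/halt/liveness and ResponseCode=400.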
Hi, I am new to Splunk. I set up a single-site cluster to parse a JSON-formatted log. I placed the props.conf and transforms.conf configuration files on the cluster manager under /opt/splunk/etc/manager-apps/_cluster/local, and they were distributed to the indexers under /opt/splunk/etc/peer-apps/_cluster/local. However, when I search from the search head, I do not see the desired effect.

props.conf

[itsd]
DATETIME_CONFIG = CURRENT
KV_MODE = json
LINE_BREAKER = ([\r\n]+)
category = Structured
disabled = false
pulldown_type = true
TRANSFORMS-null1 = replace_null
TRANSFORMS-null2 = replace_null1

transforms.conf

[replace_null]
REGEX = ^\[
DEST_KEY = queue
FORMAT = nullQueue

[replace_null1]
REGEX = (.*)(\}\s?\})
DEST_KEY = _raw
FORMAT = $1$2
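One detail that may explain the missing effect, sketched under the assumption that [itsd] is the sourcetype: KV_MODE is a search-time setting, so shipping it only to the indexer peers does nothing for field extraction in search results. A copy of the search-time piece also needs to live on the search head, for example in an app's local/props.conf there:

[itsd]
KV_MODE = json

The LINE_BREAKER and TRANSFORMS-* settings, by contrast, are index-time and belong on the peers, where a rolling restart applies them.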
I would like a search query that would display a graph with the number of closed notables divided by urgency in the last 12 hours, where the notables are retrieved based on the time they were closed. I'm using this search:

| inputlookup append=T incident_review_lookup
| rename user as reviewer
| `get_realname(owner)`
| `get_realname(reviewer)`
| eval nullstatus=if(isnull(status),"true","false")
| `get_reviewstatuses`
| eval status=if((isnull(status) OR isnull(status_label)) AND nullstatus=="false",0,status)
| eval status_label=if(isnull(status_label) AND nullstatus=="false","Unassigned",status_label)
| eval status_description=if(isnull(status_description) AND nullstatus=="false","unknown",status_description)
| eval _time=time
| `uitime(time)`
| fields - nullstatus

What's wrong?
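A minimal sketch of the closed-by-urgency piece, assuming the review time lives in the lookup's time field, closed reviews carry status_label="Closed", and urgency can be joined in from the notable events via rule_id (the `notable` macro is the Enterprise Security one):

| inputlookup incident_review_lookup
| eval _time=time
| where _time >= relative_time(now(), "-12h")
| `get_reviewstatuses`
| where status_label="Closed"
| join type=inner rule_id
    [ search `notable` | fields rule_id urgency ]
| timechart span=1h count by urgency

Filtering on the lookup's review time rather than the notable's _time is what keys the chart to when each notable was closed.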
I chose the source from the forwarded-inputs selection to input into Splunk, but I can't see Sysmon among the sources in the logs. I made the inputs.conf setting via the forwarder, but unfortunately I still couldn't see it. I do have logs: there are forwarders, and my other logs are coming in. Only the Sysmon log is not coming. I would appreciate your help.
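A minimal sketch of the inputs.conf stanza commonly used for Sysmon on the forwarder, assuming Sysmon is installed and writing to its usual Operational channel (the index name is a placeholder; renderXml matches what the Splunk Add-on for Sysmon expects):

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
renderXml = true
index = your_windows_index

After adding it, restart the forwarder and check that the account the UF runs as is allowed to read that event channel.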
Hello, I have been trying to integrate Nessus Essentials with SOAR for days, but without success so far. I installed the Nessus app in SOAR and configured the new asset with the API keys from Nessus Essentials plus the Nessus IP address and port:

Nessus server IP/hostname: https://192.168.199.78 (I tried with http and without it)
Port that the Nessus server is listening on: 8834

When I test connectivity I get:

1 action failed
Error Connecting to server. Details: HTTPSConnectionPool(host='https', port=443): Max retries exceeded with url: //192.168.199.78:8834/users (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fc8b0364940>: Failed to establish a new connection: [Errno -2] Name or service not known'))

I searched the community and other sources but didn't find anything that can help. Please, can anybody help me?

Many thanks
I have an index A and another index B. Logs in A have a correlation to logs in B, but the only common field between them is the timestamp. There is a field 'fa' in index A and a field 'fb' in index B. The timestamp in index A's logs has a +5 minute drift relative to index B. Now I want to write a query that takes field 'fa' in index A, finds the corresponding log based on timestamp (with the +5 minute drift) in index B, and gets me field 'fb' from index B.
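A minimal sketch, assuming the drift is exactly +5 minutes and the shifted timestamps then agree to the second (index names A and B are placeholders):

(index=A) OR (index=B)
| eval key=if(index=="A", _time - 300, _time)
| bin key span=1s
| stats values(fa) as fa values(fb) as fb by key
| where isnotnull(fa) AND isnotnull(fb)

If the drift is only approximately 5 minutes, widening the span (say span=10s) trades precision for match rate.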
When I create a report and enable summary indexing, the results come in the format below.

Table:
id    _time
1      2022-06-01 12:01:30.802
1      2022-06-01 12:11:47.069

But when I call this summary index using an SPL query, the milliseconds are missing in the _time column.

The query I have used (it fetches the latest run's result):

index="summary" report="yy"
| eventstats max(search_now) as latestsearch by id, report
| where search_now = latestsearch
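A minimal sketch of one workaround, under the assumption that the subsecond part is dropped when the summary event is written rather than merely hidden by display formatting: carry the full-precision timestamp as an ordinary field in the report that feeds the summary index, and read that field back instead of _time.

In the feeding report:
... | eval orig_time=strftime(_time, "%Y-%m-%d %H:%M:%S.%3N")

In the retrieval query:
index="summary" report="yy"
| eventstats max(search_now) as latestsearch by id, report
| where search_now = latestsearch
| eval _time=strptime(orig_time, "%Y-%m-%d %H:%M:%S.%3N")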
I am new to Splunk, so I'm still learning, and I know that it can do quite a bit. I am searching for similar network traffic for users based on our proxy indexes. I want to know if there is a particular site visited by all of the users in our list of 50 or so, so user and url are necessary, and I need to pull this from all of their data in our network proxy. Here is a redacted portion of a search I have honed down to, but feel free to suggest something better.

Edit, to provide a clear question: the search below doesn't work; can you provide a different search, or edits, that would get me the data I'm looking for?

index=<network one> <userID> IN (userID1,userID2) AND url=*
| stats dc(userID) as count by url
| where count=2
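A minimal sketch that scales past two users, assuming the 50 userIDs are kept in a lookup file named proxy_users.csv with a single userID column (both the lookup and the column name are placeholders):

index=<network one> url=*
    [| inputlookup proxy_users.csv | fields userID ]
| stats dc(userID) as user_count by url
| where user_count=50

The subsearch expands the lookup into an OR of userID=... terms, and the final where (replace 50 with the actual size of your list) keeps only URLs visited by every user in it.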
My client wants to know whether users who do not connect for 90 days can be blocked.