All Posts

The table I pasted has been reformatted. I have attached an image of the format that I need; please check the attached image.
Hi, I want to create a table in the below format and provide the count for them. I have multiple fields in my index and I want to create a table (similar to an Excel pivot) using three fields: App Name, Response code and Method.

index=abcd | chart count over App Name by Response code

The above works for me, but I can only create a table using 2 fields. How can I create a table in the format below with 3 fields or more? Please could you help.

APP NAME | RESPONSECODE       | RESPONSECODE       | RESPONSECODE
         | 200                | 400                | 400
         | GET  POST  PATCH   | GET  POST  PATCH   | GET  POST  PATCH
APP1     |
APP2     |
APP3     |
APP4     |
APP5     |
APP6     |
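A common pattern for this kind of three-field pivot (a sketch, assuming the extracted fields are named App_Name, Response_code and Method — adjust to whatever your actual field names are) is to concatenate two of the fields with eval and then chart over the third:

index=abcd
| eval code_method=Response_code."_".Method
| chart count over App_Name by code_method

This produces one column per Response code/Method combination (for example 200_GET, 200_POST, 400_PATCH), which is the closest a single chart command gets to the two-level column header shown above.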
Hi, how can I determine which index is responsible for the majority of Splunk license consumption when analyzing security data in ES?
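One common way to break license usage down by index (a sketch based on the internal license_usage.log; the type, idx and b fields are standard there, but verify against your environment and adjust the time range) is:

index=_internal source=*license_usage.log type=Usage earliest=-30d@d
| stats sum(b) as bytes by idx
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB

The index at the top of the list is the one consuming the most license, and from there you can drill into the sourcetypes feeding it.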
Do you know any other way to do it? I am trying to do this in 2023 and the provided link is no longer available. Does anyone have another alternative for this, please? Thank you.
I want to calculate the error count from the logs, but the errors are of two types, which can only be distinguished from the flow-end event, i.e. [ flow ended put :sync\C2V]. What condition can I put so that I can get this information from the given log?

index=us_whcrm source=MuleUSAppLogs sourcetype= "bmw-crm-wh-xl-retail-amer-prd-api" ((severity=ERROR "Transatcion") OR (severity=INFO "Received Payload"))

I am using this query to get the logs below. Now I want a condition so that when severity=ERROR, I can also get the severity=INFO "Received Payload" event (to get the correlationId details) and the flow-end event, so that I can determine the error type.
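One pattern that may work here (a sketch, assuming every event carries a correlationId field and that the flow-end text appears in the raw events; adjust field names and search strings to your data) is to also pull in the flow-end events and group everything by correlationId, keeping only the groups that contain an ERROR:

index=us_whcrm source=MuleUSAppLogs sourcetype="bmw-crm-wh-xl-retail-amer-prd-api"
    ((severity=ERROR "Transatcion") OR (severity=INFO "Received Payload") OR ("flow ended"))
| stats values(severity) as severities
        values(eval(if(searchmatch("flow ended"), _raw, null()))) as flow_end_events
        by correlationId
| where isnotnull(mvfind(severities, "ERROR"))

From the flow_end_events column you can then classify the error type, e.g. with an eval/case on whether the text contains "flow ended put :sync\C2V".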
Any idea on how to configure AppDynamics metrics for total calls per 1 hour and total calls per 24 hours? Please help me here.
Any idea on how to configure a metric for total calls per 1 hour? Please help me here.
Hello to everyone! I have a UF installed on a MS file server. Our Unified Communications Manager sends CDR and CMR files to this file server via SFTP. Often enough, I see error messages, as you can see in the screenshot (the UF cannot read the file). The strangest thing is that all information from such files is successfully read. What is wrong with my UF settings? Or maybe this is not the UF?

props.conf

[ucm_file_cdr]
SHOULD_LINEMERGE = False
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = dateTimeOrigination
BREAK_ONLY_BEFORE_DATE = False
MAX_TIMESTAMP_LOOKAHEAD = 60
initCrcLength = 1500
ANNOTATE_PUNCT = false
TRANSFORMS-no_column_headers = no_column_headers

[ucm_file_cmr]
SHOULD_LINEMERGE = False
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = dateTimeOrigination
BREAK_ONLY_BEFORE_DATE = False
MAX_TIMESTAMP_LOOKAHEAD = 13
initCrcLength = 1000
ANNOTATE_PUNCT = false
TRANSFORMS-no_column_headers = no_column_headers

transforms.conf

[no_column_headers]
REGEX = ^INTEGER\,INTEGER\,INTEGER.*$
DEST_KEY = queue
FORMAT = nullQueue
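To narrow down which part of the forwarder is reporting the read problem, a sketch (assuming the UF forwards its own _internal logs; replace the host placeholder with your file server's name) is:

index=_internal host=<your_file_server> source=*splunkd.log
    (log_level=ERROR OR log_level=WARN)
    (component=TailReader OR component=WatchedFile)

If the errors appear while a file is still being written by the SFTP transfer, they may simply be transient: the UF keeps monitoring the file and picks it up once the transfer completes, which could explain why the data is still indexed successfully.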
Hi @cedSplunk2023, I was speaking of the system used to identify vulnerabilities, not the target systems: are you using Nessus, Tenable, Qualys, or another solution? How did you index the logs, and which Add-on did you use? Have you already indexed the logs? Ciao. Giuseppe
I have a Splunk alert where I specify the fields using "| fields ErrorType host UserAgent Country IP_Addr" and I want to receive this column order in the SOAR platform. When I look at the JSON results and the UI in SOAR, the column order has changed to host, Country, IP_Addr, ErrorType and UserAgent (not the expected result). I think this has to do with the REST call and JSON data, but I would like to check if there is any quick fix we could do from the Splunk or SOAR side to show the proper column order. Any help on this will be much appreciated.
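One thing that may be worth testing (a hedged sketch; JSON object key order is not guaranteed to survive the REST round-trip, so this may not change what SOAR displays) is to enforce the column order with table instead of fields at the end of the alert search:

<your base search>
| table ErrorType host UserAgent Country IP_Addr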
The issue was that 'sort' is limited to 10,000 rows, so after replacing it with 'sort 0' I see what I need to see [no missing jobs].
Hi All, the issue is finally resolved. This is what I was told: the 'sort' command limit is 10,000, and the issue was the rather large number of logs being returned. The resolution was to replace 'sort' with 'sort 0', which returns all logs (and now I see everything I need to).
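For reference, the only difference is the leading 0: by default the sort command returns at most 10,000 results, while sort 0 removes that limit and sorts the full result set (the -_time field here is just an example; use whatever you sort by).

| sort -_time
| sort 0 -_time

The first form silently truncates at 10,000 events; the second returns all of them.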
Thanks, this helped a lot. Be aware that you need to place the sslPassword = setting in the right stanza.
Check your default app/dashboard.  I think you can get that error if the permissions changed on the app or dashboard and you no longer have access.
I don't understand it, either, but Splunk engineering has confirmed only Splunk Web is affected.  That would exclude all Universal Forwarders.
Thank you. In the splunkd log there are so many errors that I'm surprised Splunk is working at all. But this is not related to the newest update; I see that these errors were also logged in the previous release. However, I don't find errors related to this issue. With index=_internal sourcetype=splunkd source=*/splunkd.log ERROR OR WARN I also don't find any errors or warnings.
Hi

1. The best way to ensure that Splunk is not writing to that directory is to stop Splunk for the duration of the unmount + mount.
2. In the first phase, try to write something as the splunk user to that new mount point. Then, after Splunk is up and running, check from the internal logs that Splunk could freeze buckets to this mount point.

r. Ismo

BTW: NFS => Not For Splunk. There is a big possibility that using NFS will generate issues in your environment, especially if you are using a cluster and your NFS service is not stable enough.
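To check from the internal logs whether buckets are actually being frozen to the new mount point, a sketch (assuming the indexer's _internal logs are available to your search head; BucketMover is the splunkd component that logs freeze activity) is:

index=_internal sourcetype=splunkd component=BucketMover
| table _time host _raw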
I have a question about security advisory SVD-2023-0805. It states that only Splunk Web is affected, but the description clearly mentions the issue is caused by how OpenSSL is built, which is a very generic library. For this reason I would like to check whether indeed only Splunk Web is affected, or whether Splunk installations on Windows in general are affected. I can imagine that OpenSSL is also used when an SSL/TLS connection is made from a forwarder to an indexer. This leads to the question: are universal forwarders on Windows also affected by this security advisory, even when Splunk Web is disabled?
Anything that could be related to that error/warning. From the _internal logs you could try e.g. index=_internal sourcetype=splunkd source=*/splunkd.log ERROR OR WARN and select a suitable time range for the search, e.g. from when you started Splunk.
Hi @trashyroadz

Our push mode is merge_to_default. All configurations are in $SPLUNK_HOME/etc/system/local/app.conf on the Deployer. On the search page we get the error "ReplicationStatus: Failed-Failure info: failed_because_BUNDLE_SIZE_RETRIEVAL_FAILURE", and in the message box we got a message saying the bundle size exceeds the limit. On checking, we could see all apps in $SPLUNK_HOME/var/run/splunk/deploy even though we had changed only a single file. Please help with this.

We also want to know whether there is any documentation on the impact of increasing maxBundleSize on system resources and performance (Splunk/system), and, if we increase it, by how much the infrastructure needs to be scaled up in case that is needed.