All Posts



Installing or having an app is just one part of the process.  More important is onboarding data the app needs.  Has Splunk been integrated with Logbinder?  Is the data being stored where the Logbinder app expects to find it? If the app is not working properly, but the data is present, then you should be able to locate what you want using the Search & Reporting app.  You will, however, need to know a little bit about the Logbinder environment, such as the name(s) of the server(s).
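If you are not sure whether any LogBinder data has been onboarded at all, a rough check from the Search & Reporting app could look like the sketch below (the sourcetype wildcard is an assumption; adjust the index, sourcetype, and time range to your environment):

index=* sourcetype=*logbinder* earliest=-24h
| stats count BY index host source sourcetype

Any rows returned show which hosts and source paths are actually sending LogBinder data, which is also a quick way to list the log paths in use.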
I need help locating the LogBinder log paths that are actively used on some of our servers. I was told I can find the list using Splunk's TA, but when I click on "LogBinder" under Apps, it shows blank, no data. Is there any other way to locate these paths in Splunk?  Thank you in advance!
Yes, the indexers or heavy forwarders can use the regex to discard matching events.
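As a rough sketch of what that typically looks like on the indexer or heavy forwarder (my_sourcetype and the REGEX value are placeholders; apply it to the sourcetype that carries the unwanted events):

props.conf
[my_sourcetype]
TRANSFORMS-drop_unwanted = drop_unwanted_events

transforms.conf
[drop_unwanted_events]
REGEX = <pattern matching the events to discard>
DEST_KEY = queue
FORMAT = nullQueue

The settings must live on the first full Splunk instance (indexer or heavy forwarder) that parses the data, and a restart is needed for them to take effect. If events still get through, test the REGEX against the raw event text rather than against search-time fields.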
@richgalloway  Can we use props and transforms to send the unwanted events to the null queue, as the applied regex is not working?
We are getting errors in our splunkd.log. Can you please help resolve them?

11-21-2023 11:50:33.289 -0700 WARN AwsSDK [12369 ExecProcessor] - ClientConfiguration Retry Strategy will use the default max attempts.
11-21-2023 11:50:33.289 -0700 WARN AwsSDK [12369 ExecProcessor] - ClientConfiguration Retry Strategy will use the default max attempts.
11-21-2023 11:50:34.290 -0700 ERROR AwsSDK [12369 ExecProcessor] - CurlHttpClient Curl returned error code 28 - Timeout was reached
11-21-2023 11:50:34.291 -0700 ERROR AwsSDK [12369 ExecProcessor] - EC2MetadataClient Http request to retrieve credentials failed
11-21-2023 11:50:34.291 -0700 WARN AwsSDK [12369 ExecProcessor] - EC2MetadataClient Request failed, now waiting 0 ms before attempting again.
11-21-2023 11:50:35.292 -0700 ERROR AwsSDK [12369 ExecProcessor] - CurlHttpClient Curl returned error code 28 - Timeout was reached
11-21-2023 11:50:35.292 -0700 ERROR AwsSDK [12369 ExecProcessor] - EC2MetadataClient Http request to retrieve credentials failed
11-21-2023 11:50:35.292 -0700 ERROR AwsSDK [12369 ExecProcessor] - EC2MetadataClient Can not retrive resource from http://169.254.169.254/latest/meta-data/placement/availability-zone

Version: Splunk Universal Forwarder 9.0.6 (build 050c9bca8588)
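To see which hosts are generating these AwsSDK messages and how frequently, a quick sketch (assuming the forwarder's internal logs reach the _internal index):

index=_internal sourcetype=splunkd component=AwsSDK (log_level=WARN OR log_level=ERROR)
| stats count BY host log_level

The messages themselves indicate that something on the forwarder is trying to reach the EC2 instance metadata endpoint (169.254.169.254) and timing out, which typically points at an AWS-related input or add-on running on a host that cannot reach that endpoint.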
@richgalloway Thanks, it worked, and I appreciate it.
There is no documented way to do that.  Splunk recommends engaging Professional Services for that situation.  See https://docs.splunk.com/Documentation/Splunk/9.1.2/Indexer/Migratenon-clusteredindexerstoaclusteredenvironment#Is_there_any_way_to_migrate_my_legacy_data.3F It's not as simple as copying data from one indexer to another because care must be taken to ensure bucket IDs are not duplicated.
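If you want a rough inventory of existing bucket IDs before attempting anything, a sketch (run from a search head that can see both the old standalone indexers and the new cluster; dbinspect over every index can be expensive, so narrow the index list in practice):

| dbinspect index=*
| stats values(splunk_server) AS servers dc(splunk_server) AS server_count BY index bucketId
| where server_count > 1

Keep in mind that in an indexer cluster, replicated copies of a bucket legitimately share the same bucket ID, so this is only a starting point for spotting potential collisions, not a migration procedure.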
Hey Rick, thanks for responding! I saw that page, but unfortunately it doesn't specifically mention costs or data transfer limitations...  say they're restoring data daily (edge case I know), but do they only EVER pay for the 500GB block or will they be surprised by transfer costs if they utilize the feature too much? *MY* answer is "data transfer costs are likely built into the cost model"  but they want a specific answer.
Hi @Ajith.Kumar, Are you familiar with End User Monitoring? https://docs.appdynamics.com/appd/21.x/21.5/en/end-user-monitoring
Try below. It will start the bin from Saturday.

| bin span=1w@w6 _time

For Monday it will be:

| bin span=1w@w1 _time
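For context, a minimal end-to-end example using the Saturday alignment might look like this (the index name web is just a placeholder):

index=web
| bin span=1w@w6 _time
| stats count BY _time

This buckets events into weeks that start on Saturday and counts events per week.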
Thanks! I initially tried that call and it wasn't working for me, but I ended up realizing it was because I was adding my fields to the "Custom fields" and not the "CEF" settings. After speaking with the Splunk team, it sounds like Custom fields are meant for reference within SOAR, like adding some information to the HUD, whereas CEF is what's actually used to access the artifact data. I appreciate your reply.
Try this:

phantom.collect2(container=container, datapath=["artifact:*.cef.FIELD_NAME"])
Hi @Zoltan.Gutleber, Thanks so much for following up with the solution. We really like to see members sharing new discoveries and insights with the community!
It doesn't work; it complains about indentation.
Hi @Jack90, sorry I didn't realize you were talking about Splunk Cloud! Forget Indexers! Ciao. Giuseppe
I am getting the below error from splunkd. How can I fix the root cause of this error? Please suggest a workaround.
Thank you so much for your answer. Could you kindly clarify what you mean by setting roles on the indexers in Splunk Cloud?
Hi @Viveklearner , please see my approach and adapt it to your data:

<your_search>
| eval Status=case(status>=200 AND status<400,"Success",status>=400 AND status<500,"Exception",status>=500,"Failure",status)
| stats count BY Status

Ciao. Giuseppe
We have a range of status codes from 200 to 600. We want to search the logs and create output in the sample format below, with 200 to 400 as Success, 401 to 500 as Exception, and 501 to 600 as Failure:

Success - 100
Exception - 44
Failure - 3

I am able to get data in the above format, but I am getting duplicate rows for each category, e.g.

Success - 10
Success - 40
Success - 50
Exception - 20
Exception - 24
Failure - 1
Failure - 2

Query:

Ns=abc app_name= xyz
| stats count by status
| eval status=if(status>=200 and status<400,"Success",status)
| eval status=if(status>=400 and status<500,"Exception",status)
| eval status=if(status>=500,"Failure",status)

Kindly help.
Hi @krutika_ag , if these Splunk servers are sending internal logs to Splunk, you could use something like this:

for Windows servers:

index=_internal
| rex field=source "^(?<splunk_home>.*)Splunk"
| dedup host
| table host splunk_home

for Linux servers:

index=_internal
| rex field=source "^(?<splunk_home>.*)splunk"
| dedup host
| table host splunk_home

Ciao. Giuseppe