All Posts

thanks
@Alan_Chan Is your ES IP added to your SOAR allow list?
Check the connection before pairing - confirm that Enterprise Security can initiate a TCP connection for REST calls to the SOAR port.
The error also highlights an SSL handshake failure - are you using a self-signed certificate or one from a valid CA on the SOAR side? Note that Splunk Enterprise Security requires a valid SSL certificate to communicate with Splunk SOAR.
Ref: https://help.splunk.com/en/splunk-enterprise-security-8/administer/8.0/configuration-and-settings/pair-splunk-enterprise-security-with-splunk-soar
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
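For reference, one quick way to check both reachability and the certificate before pairing is to hit the SOAR REST port from the ES search head; the hostname, port, and endpoint below are placeholders for your environment:

curl -v https://<soar-host>:8443/rest/version

The -v output shows the TLS handshake, so a self-signed or otherwise untrusted certificate is visible straight away (adding -k only confirms basic reachability - ES itself still needs a certificate it can trust).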
solved.  
@uagraw01 How did you change the port? I mean, when I reconfigure /splunk/etc/system/local/server.conf with port = 8192 under the [kvstore] stanza, the KV store can no longer be enabled (status = failed).
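For reference, this is the shape of the change being described (a minimal sketch - 8192 is just the example port from the post, and splunkd has to be restarted for it to take effect):

# $SPLUNK_HOME/etc/system/local/server.conf
[kvstore]
port = 8192

If the KV store still reports status = failed after the restart, splunkd.log and mongod.log under $SPLUNK_HOME/var/log/splunk are the usual places to look for the reason.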
I installed the UF 9.1.7 ARM package on Rocky 9 and I get the error "tcp_conn_open_afux ossocket_connect failed with No such file or directory" when setting deploy-poll. Is this a compatibility problem?
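For context, the command being run is the standard deployment-client setup (the deployment server host below is a placeholder):

$SPLUNK_HOME/bin/splunk set deploy-poll <deployment-server>:8089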
Splunk ES is using port 8000 while SOAR is using port 8443.
I am using the Java SignalFlow client to send the same query each minute. Only the start and end times change. I actually set the start and end time to the same value, which seems to reliably give me a single data point, which is what I want. "persistent" is false and "immediate" is true. I'm reusing the SignalFlowClient object but closing the computation after reading the results.
If I run the client in a loop with a 60-second delay between iterations, I get frequent but unpredictable HTTP 400 Bad Request responses. It appears the first request always succeeds. There is no further info about what's bad. Output looks like this:

com.signalfx.signalflow.client.SignalFlowException: 400: failed post [ POST https://stream.us0.signalfx.com:443/v2/signalflow/execute?start=1750889822602&stop=1750889822602&persistent=false&immediate=true&timezone=America%2FChicago HTTP/1.1 ] reason: Bad Request
    at com.signalfx.signalflow.client.ServerSentEventsTransport$TransportConnection.post(ServerSentEventsTransport.java:338)
    at com.signalfx.signalflow.client.ServerSentEventsTransport.execute(ServerSentEventsTransport.java:106)
    at com.signalfx.signalflow.client.Computation.execute(Computation.java:185)
    at com.signalfx.signalflow.client.Computation.<init>(Computation.java:67)
    at com.signalfx.signalflow.client.SignalFlowClient.execute(SignalFlowClient.java:145)

How can I troubleshoot this further? I can't find much useful info about how the client is supposed to work. Thanks.
The general idea is OK but there are details which can pop up unexpectedly here and there.
1. I assume (never used it myself) that Amazon Linux is also an RPM-based distro and you'll be installing Splunk the same way it was installed before.
2. Remember to shut down the Splunk service before moving the data. And of course don't start the new instance before you copy the data.
3. I'm not sure why you want to snapshot the volumes. For backup in case you need to roll back?
4. You might have other dependencies lying around, not included in $SPLUNK_HOME - for example certificates.
5. If you move whole filesystems between server instances the UIDs and GIDs might not match and you might need to fix your accesses (see the example after this list).
Oh, and most importantly - I didn't notice it at first - DON'T UPGRADE AND MOVE AT THE SAME TIME! Either upgrade and then do the move to the same version on a new server, or move to the same 8.x you have now and then upgrade on the new server.
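On point 5, the typical fix after moving the filesystem is simply re-owning the tree to whatever account splunkd runs as - a sketch assuming the service account is called splunk and $SPLUNK_HOME is /opt/splunk:

chown -R splunk:splunk /opt/splunk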
Or...if you don't want to do a props you can use an eval statement inline:

| eval Source=mvindex('AccountName',0)
| eval Destination=mvindex('AccountName',1)

Replace Source/Destination with whatever you want the new name to be and AccountName with whatever the original names are. Also, if you have the proper TA (Splunk Add-on for Microsoft Windows | Splunkbase) installed in all the places it should be, these evaluations (via props) should already be happening as src_* and dest_* as per CIM normalization (Overview of the Splunk Common Information Model | Splunk Docs).
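If you do want the props-based version of the same thing, it looks roughly like this - a sketch where the sourcetype stanza name is hypothetical, so use whatever your Windows events actually come in as:

# props.conf on the search head
[your:windows:sourcetype]
EVAL-Source = mvindex(AccountName, 0)
EVAL-Destination = mvindex(AccountName, 1)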
You should not modify the original raw event; just create a props to do search-time representational changes to the data. The only time you should modify the raw data is if you are masking it to protect sensitive information such as credit card or identification numbers like social security here in the US. Modifying raw log data before sending it to Splunk destroys forensic integrity, making it impossible to validate original events during investigations. It also breaks source consistency, impacting troubleshooting, compliance, and accurate analytics.
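For the masking case mentioned above, the usual index-time mechanism is SEDCMD in props.conf on the indexer (or heavy forwarder) - the stanza name and pattern below are only illustrative:

# props.conf
[your:sourcetype]
SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/xxx-xx-xxxx/g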
LOL here's ChatGPT, looks like PowerAutomate does have an HTTP post...IDK if it can read a full file, but it probably can   
I haven't used PowerAutomate...so I don't know if this is possible...but could PowerAutomate create an HTTP post event and send the data to a Splunk HEC endpoint?
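If PowerAutomate (or anything else that can make an HTTP call) can get at the file contents, the HEC side is just a POST like this - host, token, and sourcetype are placeholders:

curl -k https://<splunk-host>:8088/services/collector/event \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "<one CSV row or the whole file contents>", "sourcetype": "sharepoint:csv"}'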
Hi All: Thank you and I appreciate your response. We have a standalone instance of Splunk indexer and I double-checked: for the most part we're using the 8.2.9 version of the Splunk UF. Additionally, since Splunk Enterprise 9.2.7 is the version in the 9.x.x family that supports Amazon Linux, we'll go for that version. Currently the indexes' physical location is spread across volumes that are mounted on the Splunk indexer host at OS level. Please let us know if this is the right approach and if there are any stages we're missing with regard to the data cutover from the old to the new server.
• Install Splunk Enterprise 9.2.7 on a new AL2023 server
• Take a snapshot of the old server's volumes that contain indexed data, then connect them to the new one using the same mount points
• Copy the entire $SPLUNK_HOME/etc directory from the old server to the new server
• Copy indexed data from $SPLUNK_DB (/opt/splunk/var/lib/splunk) to the new server (see the example after this list)
• Detach & attach the public IP/EIP from the old to the new server
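For what it's worth, the two copy steps above can be as simple as an rsync run while Splunk is stopped on both sides - a sketch assuming the default /opt/splunk layout and SSH access to the new host:

rsync -a /opt/splunk/etc/ newserver:/opt/splunk/etc/
rsync -a /opt/splunk/var/lib/splunk/ newserver:/opt/splunk/var/lib/splunk/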
Firstly, a golden shovel - this is a very, very old thread. Secondly, you are mistaken. The event will indeed not get "immediately deleted", but for a completely different reason. There are several factors here:
- events are not handled on their own but by buckets
- hot buckets do not roll to frozen directly
- "unusual" events (too far in the past or "from the future") are indexed in quarantine buckets which might get rolled completely differently than your normal buckets.
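The quarantine behaviour from the last point is controlled per index in indexes.conf; the values below are the documented defaults, so treat the stanza as a sketch and check the spec file for your version:

# indexes.conf
[your_index]
quarantinePastSecs = 77760000      # events with _time further than this in the past land in quarantine buckets
quarantineFutureSecs = 2592000     # events with _time further than this in the future land in quarantine buckets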
Thanks. Believe I got it. What tripped me up is that I didn't realize latest could be used for non-time-based fields.
I don't mean SharePoint activity, admin or audit logs. I mean actual data files (that will be converted later to lookup files in Splunk Cloud). Basically, do I need to extract the CSV files from SharePoint first (eg to a traditional on-prem file share by way of Power Automate) and use a UF to forward the files to Splunk Cloud, or is there some other nifty way to forward CSV data files directly from SharePoint Online to Splunk Cloud, or some other intermediary method? Thank you.
Data retention is not based on _time; it's actually based on _indextime and the max size set. For example, if I index the below sample data now,

2020-03-02 12:23:23 blah blah

Retention time: 6 months
Max size: 100GB

then the _time of the event will be 2020-03-02 12:23:23 but _indextime will be 2025-06-25 HH:MM:SS, so this data will not get deleted immediately even though the _time of this event is 5 years old.
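For completeness, the two limits in the example map to these indexes.conf settings - a sketch where the index name is hypothetical and 6 months is approximated as 180 days:

# indexes.conf
[your_index]
frozenTimePeriodInSecs = 15552000   # 180 days x 86400 seconds
maxTotalDataSizeMB = 102400         # 100GB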
Thank you @VatsalJagani !! This is so helpful. A lot of examples to do exactly what I wanted. Works like a charm now.
@LAME-Creations "For RBAC purposes, you can just make your summary index reside on the same index that it was created for." --> If I do it this way, will it override my original index data? And how can I differentiate between the original and summary logs present in the same index? We use the source field to get the application name in the normal index.
In my case the user wants to see raw data as well and we need all fields to be viewable every time. Will the summary index provide raw data as well?
We have the index format in this way - waf_app_<appID>; the app ID is different for different app teams. And in the dashboard I have just given index=waf_app_* in the base search. What index can I give now in the summary index (same as my original index)?
I am very confused here...
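One way to keep summary events distinguishable when they land in the same index (as discussed above) is to stamp them with their own source when collecting - the index and source names below are only illustrative:

... | collect index=waf_app_123 source="waf_summary"

Dashboards can then keep index=waf_app_* as the base search and separate the two with source="waf_summary" versus source!="waf_summary".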
@livehybrid Thank you very much for this solution, however I need it to be as streamlined as possible. While this does give the essential functionality that I need, it is a solution I would have to build on further. My needs specifically are that it is all click based, that nothing has to be changed anywhere else, and that it can all be handled within the timechart/bar chart. Anything else is to be fully hidden away. Much like the search app counterpart, I also need the timechart to update itself based on the time range the rest of the visualization would use. Ideally this would just zoom in on one bar. Unfortunately, what I am trying to make can't have the text box and specific manual inputs. Simply put, this does accomplish the goal of setting earliest and latest time values that are 2h apart so the other visualizations can take those tokens as their time range, but I need a different approach to getting there.