All Posts



Thank you for the details; this will help me with the current dashboard.
If you mean a Splunk Enterprise trial, you can configure TLS on any component. If you mean a Splunk Cloud trial, no: the inputs are not encrypted, and the web UI uses self-signed certificates, as far as I remember.
Thank you for your response. Since regex cannot be used in lookups, we are now defining everything within correlation searches, which can be cumbersome to update. Is there an alternative solution? Are there more efficient ways to detect suspicious command execution without relying solely on correlation searches? Your guidance on streamlining this process would be greatly appreciated.
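One common workaround (not from this thread, just a sketch): lookup definitions can't take full regex, but transforms.conf does support match_type = WILDCARD(<field>), which is often enough for command-line patterns and keeps the pattern list editable without touching the correlation search. The lookup name, file, and field names below are all hypothetical:

```
# transforms.conf -- hypothetical wildcard lookup definition
[suspicious_commands]
filename = suspicious_commands.csv
match_type = WILDCARD(pattern)
```

The correlation search would then just reference the lookup, e.g. `| lookup suspicious_commands pattern AS CommandLine OUTPUT category | where isnotnull(category)`, so new patterns only require a CSV edit.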
The overall idea is OK, but if you want to check whether something happens _after_ an interesting event, you must reverse the original data stream, because streamstats cannot look backwards. The example data was in chronological order, while the default result sorting is the opposite, so it's all a bit confusing.
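As a sketch of that reversal (the Log_text field name is assumed from the example data elsewhere in the thread): sort descending by time so that streamstats, which only sees rows already processed, effectively accumulates each event's future; then sort back to chronological order.

```
| sort 0 - _time
| streamstats current=f time_window=120s count(eval(Log_text="disconnected")) AS disconnects_after
| sort 0 _time
```

Note that time_window expects events in descending time order, which is why the first sort matters.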
Hello @sainag_splunk, Thanks for sharing the information. Yes, I am currently using Splunk Cloud Trial version for my POC work. Thanks.
@Splunk_Fabi Hello, which version of ES are you using? I have seen a similar bug in 7.3.2 (a fix might be on the future roadmap). If you are on 7.3.2, please file a ticket with Splunk Support to expedite the issue. If this helps, please upvote.
@rahusri2 This could be a bug in the REST endpoint on the backend '/servicesNS/admin/search/server/info' for that version/build. Are you using Splunk Cloud Trial? It's recommended to reach out to Splunk Support if this is happening in your production environment. If this helps, please upvote.
@s_s Hello, check the queues on the HWF pipeline, and also see if you can apply async forwarding. https://www.linkedin.com/pulse/splunk-asynchronous-forwarding-lightning-fast-data-ingestor-rawat If this helps, please upvote.
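To eyeball those queues, one option is the standard queue metrics in _internal (the host value below is a placeholder for your heavy forwarder):

```
index=_internal host=my-heavy-forwarder source=*metrics.log* group=queue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(fill_pct) BY name
```

A queue that sits near 100% (typically tcpout or indexqueue) points at where the pipeline is backing up.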
@fatsug Hello! The last_validated_bundle differs from the active_bundle, which identifies the bundle that was most recently applied and is currently active across the peer nodes. Refer: https://docs.splunk.com/Documentation/Splunk/9.3.2/Indexer/Updatepeerconfigurations#Use_the_CLI_to_validate_the_bundle_and_check_restart If this helps, please upvote.
@kwangwon The Splunk Cloud trial version is a standalone system and uses self-signed certs. You can try using curl -k "https://ilove.splunkcloud.com:8088/services/collector" If this reply helps, please upvote.
@SteveBowser Check out $decideOnStartup for the host setting in inputs.conf, and hostnameOption = [ fullyqualifiedname | clustername | shortname ] in server.conf. If this reply helps, please upvote.
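For reference, a minimal sketch of how those two settings might be laid out (the choice of fullyqualifiedname is just an example; check the .conf specs for your version):

```
# inputs.conf -- let Splunk resolve the host value at startup
[default]
host = $decideOnStartup

# server.conf -- control which form of the machine name is used
[general]
hostnameOption = fullyqualifiedname
```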
To build on what @MuS says, here's a simple example that simulates two data sets, the switch data (index A) and the devices data (index B), where the stats command shows how to "join" the two. Everything up to the last two lines just sets up dummy data sets to model your example; the final search/stats then does roughly what you are looking for. You can copy/paste this into a search window:

| makeresults count=1000
| fields - _time
| eval SwitchID=printf("Switch%02d",random() % 5)
| eval Mac=printf("00-B0-D0-63-C2-%02d", random() % 10)
| eval index="A"
| append [
  | makeresults count=1000
  | fields - _time
  | eval r=random() % 10
  | eval Mac=printf("00-B0-D0-63-C2-%02d", r)
  | eval dhcp_host_name=printf("Host%02d", r)
  | eval index="B", source="/var/logs/devices.log"
  | fields - r ]
| eval r=random() % 10
| sort r
| fields - r
``` Now we have a bunch of rows from index A and B```
| search (index="A" SwitchID=switch01) OR (index="B" source="/var/logs/devices.log")
| stats count values(dhcp_host_name) as dhcp_host_name values(SwitchID) as SwitchID by Mac

Hope this helps
You can also do it with streamstats, using the last two lines of this example. Note the field name Log_text, with the underscore in the middle, as the reset_after statement doesn't like spaces in the field name.

| makeresults format=csv data="Row,Time,Log_text
1,7:00:00am,connected
2,7:30:50am,disconnected
3,7:31:30am,connected
4,8:00:10am,disconnected
5,8:10:30am,disconnected"
| eval _time=strptime(Time, "%I:%M:%S%p")
| sort - _time
| streamstats time_window=120s reset_after="("Log_text=\"disconnected\"")" count
| where count=1 AND Log_text="disconnected"
Every time we have to force replication on the SH nodes of a SH cluster, inputs.conf replicates and overwrites the hostname. Is there any way to blacklist a .conf file by location to prevent it from replicating when you do a forced resync of the SH nodes?
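If memory serves, server.conf has a per-file switch for SHC configuration replication; something along these lines on each cluster member (double-check the exact setting name in server.conf.spec for your version before relying on it):

```
# server.conf on each search head cluster member
[shclustering]
# stop inputs.conf changes from being replicated across the cluster
conf_replication_include.inputs = false
```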
Usually you don't keep your indexes on the same filesystem as your Splunk binaries and configurations. Try to add more disk space (I prefer to use LVM on Linux) and start using Splunk volumes; with those, your life is much easier. There are many (or at least some) answers where we have discussed this, and you should also read more about it in the docs.
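A minimal volume setup in indexes.conf might look like this (paths and sizes are made-up examples; note that thawedPath cannot reference a volume):

```
# indexes.conf -- cap total disk usage across indexes via a volume
[volume:primary]
path = /srv/splunk-data
maxVolumeDataSizeMB = 500000

[main]
homePath   = volume:primary/defaultdb/db
coldPath   = volume:primary/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
```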
@luizlimapg is correct. If you copy and paste your search into the Simple XML code window (or the Dashboard Studio code window, for that matter), some special characters will be interpreted by the XML engine (or the JSON engine). If you need to do that, use HTML entities to represent these special characters. It is best to avoid this, however: if you have a panel, copy and paste your search code into the Search popup (and similarly into the search box under Input).
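For example, a search containing < or & has to be escaped when pasted directly into Simple XML source (the search itself is a made-up illustration):

```
<query>index=web status&gt;=500 uri="/checkout&amp;step=2" | stats count BY host</query>
```

Pasting the same search through the Search popup avoids the manual escaping entirely.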
This is one of few occasions where the transaction command is appropriate. Something like:

| rename "Log text" as LogText
| transaction maxspan=120s startswith="LogText = disconnected" endswith="LogText = connected" keeporphans=true
| where isnull(closed_txn)

Your mock data would give:

LogText       Row  _time                closed_txn  duration  eventcount  field_match_sum  linecount
disconnected  5    2024-12-17 08:10:30
disconnected  4    2024-12-17 08:00:10

Here is an emulation of your mock data:

| makeresults format=csv data="Row, _time, Log text
1, 7:00:00am, connected
2, 7:30:50am, disconnected
3, 7:31:30am, connected
4, 8:00:10am, disconnected
5, 8:10:30am, disconnected"
| eval _time = strptime(_time, "%I:%M:%S%p")
| sort - _time
``` data emulation above ```

Play with the emulation and compare with real data.
Hello isoutamo, Thanks for your help!  I was able to log into one of the indexers and manually set frozenTimePeriodInSecs to a lower value.  This seemed to then allow me to Validate and Check, and then Push the new bundle from the Cluster Manager. So, it seems things are much more stable and the errors and warnings have disappeared.  But my indexers are still showing about 94% full for the /opt/splunk folder.
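For reference, that retention setting lives per index in indexes.conf; roughly 90 days would look like this (index name and values are examples — buckets also roll to frozen when maxTotalDataSizeMB is hit, whichever limit comes first):

```
# indexes.conf -- freeze (by default, delete) data older than ~90 days
[my_index]
frozenTimePeriodInSecs = 7776000
maxTotalDataSizeMB = 250000
```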
Are you trying to install the most recent version of SOAR? If so, upgrade to PostgreSQL 15 if you can. The documentation is unclear, but that's essentially required for 6.3. We ran into trouble trying to upgrade with PostgreSQL 12; I can only imagine 11 has problems as well.
Yes, thank you @bowesmana