All Posts

@rahusri2 This could be a bug in the backend REST endpoint '/servicesNS/admin/search/server/info' for that version/build. Are you using a Splunk Cloud trial? If this is happening in your production environment, it's recommended to reach out to Splunk Support. If this helps, please upvote.
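For reference, one way to check what that endpoint returns is to query it directly on the management port; a hedged sketch (hostname and credentials are placeholders, and management-port access may be restricted on Splunk Cloud):

curl -k -u admin:yourpassword "https://your-stack.splunkcloud.com:8089/servicesNS/admin/search/server/info?output_mode=json"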
@s_s Hello, check out the queues on the heavy forwarder (HWF) pipeline, and also see if you can apply asynchronous forwarding: https://www.linkedin.com/pulse/splunk-asynchronous-forwarding-lightning-fast-data-ingestor-rawat If this helps, please upvote.
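As a starting point for checking those queues, here is a sketch of a common search against the internal metrics log (the host value is a placeholder for your heavy forwarder):

index=_internal host=my-hwf source=*metrics.log group=queue
| eval pct_full = round(current_size_kb / max_size_kb * 100, 1)
| stats max(pct_full) AS max_pct_full by name
| sort - max_pct_full

Queues that sit near 100% full point to the bottleneck stage of the pipeline.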
@fatsug Hello! The last_validated_bundle differs from the active_bundle, which identifies the bundle that was most recently applied and is currently active across the peer nodes. Refer to: https://docs.splunk.com/Documentation/Splunk/9.3.2/Indexer/Updatepeerconfigurations#Use_the_CLI_to_validate_the_bundle_and_check_restart If this helps, please upvote.
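The linked page covers the CLI flow; roughly, on the cluster manager it looks like this (verify the exact syntax for your version):

splunk validate cluster-bundle --check-restart
splunk show cluster-bundle-status
splunk apply cluster-bundle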
@kwangwon The Splunk Cloud trial version is a standalone system and uses self-signed certs. You can try using: curl -k "https://ilove.splunkcloud.com:8088/services/collector" If this reply helps, please upvote.
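To send an actual test event, HEC also needs a token header and a JSON payload; a minimal sketch, assuming you have a valid HEC token (the token value is a placeholder):

curl -k "https://ilove.splunkcloud.com:8088/services/collector" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello world", "sourcetype": "manual"}'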
@SteveBowser Check out $decideOnStartup in inputs.conf, and hostnameOption = [ fullyqualifiedname | clustername | shortname ] in server.conf. If this reply helps, please upvote.
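A hedged sketch of how those settings might look on disk (stanza placement per the .spec files; confirm against your version's documentation):

# inputs.conf
[default]
host = $decideOnStartup

# server.conf
[general]
hostnameOption = fullyqualifiedname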
To build on what @MuS says, here's a simple example that simulates two data sets, the switch data (index A) and the devices data (index B), and the stats command shows how to "join" the two. Everything up to the last two lines just sets up dummy data sets to model your example, and then the search/stats does roughly what you are looking for. You can copy/paste this into a search window:

| makeresults count=1000
| fields - _time
| eval SwitchID=printf("Switch%02d",random() % 5)
| eval Mac=printf("00-B0-D0-63-C2-%02d", random() % 10)
| eval index="A"
| append [
  | makeresults count=1000
  | fields - _time
  | eval r=random() % 10
  | eval Mac=printf("00-B0-D0-63-C2-%02d", r)
  | eval dhcp_host_name=printf("Host%02d", r)
  | eval index="B", source="/var/logs/devices.log"
  | fields - r ]
| eval r=random() % 10
| sort r
| fields - r
``` Now we have a bunch of rows from index A and B```
| search (index="A" SwitchID=switch01) OR (index="B" source="/var/logs/devices.log")
| stats count values(dhcp_host_name) as dhcp_host_name values(SwitchID) as SwitchID by Mac

Hope this helps
You can also do it with streamstats, using the last two lines of this example. Note the field name Log_text, with the _ in the middle, as the reset_after statement doesn't like spaces in the field name.

| makeresults format=csv data="Row,Time,Log_text
1,7:00:00am,connected
2,7:30:50am,disconnected
3,7:31:30am,connected
4,8:00:10am,disconnected
5,8:10:30am,disconnected"
| eval _time=strptime(Time, "%I:%M:%S%p")
| sort - _time
| streamstats time_window=120s reset_after="("Log_text=\"disconnected\"")" count
| where count=1 AND Log_text="disconnected"
Every time we have to force replication on the SH nodes of a SH cluster, the inputs.conf replicates and overwrites the hostname. Is there any way to blacklist a .conf file by location to prevent it from replicating when you do a forced resync of the SH nodes?
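One setting worth investigating, assuming it applies to your version (check server.conf.spec before relying on it), is the per-file replication toggle on the SHC members:

# server.conf on each search head cluster member
[shclustering]
conf_replication_include.inputs = false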
Usually you don't keep your indexes on the same filesystem as your Splunk binaries and configurations. Try to add some more disk space (I prefer to use LVM on Linux) and start to use Splunk volumes; with those, your life is much easier. There are several answers here where we have discussed this, and the docs cover volumes in more detail.
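A hedged sketch of what volumes look like in indexes.conf (paths and sizes are placeholders; see indexes.conf.spec for your version):

# indexes.conf
[volume:primary]
path = /data/splunk/indexes
maxVolumeDataSizeMB = 500000

[main]
homePath = volume:primary/defaultdb/db
coldPath = volume:primary/defaultdb/colddb
# thawedPath cannot reference a volume
thawedPath = $SPLUNK_DB/defaultdb/thaweddb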
@luizlimapg is correct. If you copy and paste your search into the Simple XML code window (or the Dashboard Studio code window, for that matter), some special characters will be interpreted by the XML engine (or the JSON engine). If you need to do that, use HTML entities to represent these special characters. It is best to avoid this, however: if you have a panel, copy and paste your search code into the Search popup instead. (Similarly, use the search box under Input.)
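For illustration, a hypothetical search containing < and & would need to be escaped like this inside a Simple XML <query> element:

<query>index=main status&lt;400 | search msg="alpha &amp; beta"</query>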
This is one of the few occasions where the transaction command is appropriate. Something like

| rename "Log text" as LogText
| transaction maxspan=120s startswith="LogText = disconnected" endswith="LogText = connected" keeporphans=true
| where isnull(closed_txn)

Your mock data would give

LogText       Row   _time                 closed_txn   duration   eventcount   field_match_sum   linecount
disconnected  5     2024-12-17 08:10:30
disconnected  4     2024-12-17 08:00:10

Here is an emulation of your mock data.

| makeresults format=csv data="Row, _time, Log text
1, 7:00:00am, connected
2, 7:30:50am, disconnected
3, 7:31:30am, connected
4, 8:00:10am, disconnected
5, 8:10:30am, disconnected"
| eval _time = strptime(_time, "%I:%M:%S%p")
| sort - _time
``` data emulation above ```

Play with the emulation and compare with real data.
Hello isoutamo, Thanks for your help! I was able to log into one of the indexers and manually set frozenTimePeriodInSecs to a lower value. This then allowed me to Validate and Check, and then push the new bundle from the Cluster Manager. Things now seem much more stable and the errors and warnings have disappeared, but my indexers are still showing about 94% full for the /opt/splunk folder.
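To see which indexes are consuming that space, one hedged option is dbinspect, run from a search head (sizeOnDiskMB is the bucket size field it reports):

| dbinspect index=*
| stats sum(sizeOnDiskMB) AS totalMB by index
| sort - totalMB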
Are you trying to install the most recent version of SOAR? If so, upgrade to PostgreSQL 15 if you can. The documentation is unclear, but that's essentially required for 6.3. We ran into trouble trying to upgrade with PostgreSQL 12; I can only imagine 11 has problems as well.
Yes, thank you @bowesmana 
Thanks for the response. I've tweaked my logic to reduce the number of lines I need in my base search, making sure I do a stats in my base search before the chain. Closing this out.
Hi there, Have a read here https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/Knowledge/Usesummaryindexing#Get_started_with_summary_indexing    cheers, MuS
Hi dtaylor, Have you seen this: https://community.splunk.com/t5/Splunk-Search/How-to-compare-fields-over-multiple-sourcetypes-without-join/m-p/113477 ? It's relevant since you are already thinking of using `stats`. The important thing is really to get a common field from the various data sets and use that in your stats; in your case you could use the field `src_mac`. Something as simple as

| stats values(*) AS * by _time src_mac

after your base search should work, as long as you get src_mac for all data sets. Hope this helps ... cheers, MuS
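To make that concrete, a hedged sketch of the overall shape, borrowing the index and field names from the question (adjust to your data):

(index="indexA" log_type IN(Failed_Attempts, Passed_Authentications)) OR (index="indexB" source="/var/logs/devices.log")
| stats values(dhcp_host_name) AS dhcp_host_name values(SwitchID) AS SwitchID by src_mac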
I've been working on a search that I *finally* managed to get working. It looks for events generated by a provided network switch and port name, then gives me all the devices that have connected to that specific port over a period of time. Fortunately, most of the device data is included alongside the events which contain the switch/port information... that is... everything except the hostname. Because of this, I've tried to use the join command to perform a second search through a second data set which contains the hostnames for all devices that have connected to the network, and match those hostnames based on the shared MAC address field. The search works, and that's great, but it can only run over a time period of about a day or so before the subsearch breaks past the 50k event limit. Is there any way I can get rid of the join command and maybe use the stats command instead? That's what similar posts to this one seem to suggest, but I have trouble wrapping my head around how the stats command can be used to correlate data from two different events from different data sets... in this case the dhcp_host_name getting matched to the corresponding device in my networking logs. I'll gladly take any assistance. Thank you.

index="indexA" log_type IN(Failed_Attempts, Passed_Authentications) IP_Address=* SwitchID=switch01 Port_Id=GigabitEthernet1/0/13
| rex field=message_text "\((?<src_mac>[A-Fa-f0-9]{4}\.[A-Fa-f0-9]{4}\.[A-Fa-f0-9]{4})\)"
| eval src_mac=lower(replace(src_mac, "(\w{2})(\w{2})\.(\w{2})(\w{2})\.(\w{2})(\w{2})", "\1:\2:\3:\4:\5:\6"))
| eval time=strftime(_time,"%Y-%m-%d %T")
| join type=left left=L right=R max=0 where L.src_mac=R.src_mac L.IP_Address=R.src_ip
  [| search index="indexB" source="/var/logs/devices.log"
   | fields src_mac src_ip dhcp_host_name]
| stats values(L.time) AS Time, count as "Count" by L.src_mac R.dhcp_host_name L.IP_Address L.SwitchID L.Port_Id
I've piped a Splunk log query extract into a table showing disconnected and connected log entries sorted by time. NB row 1 is fine. Row 2 is fine because it connected within 120 sec. Now I want to show "disconnected" entries with no subsequent "connected" row, say within a 120 sec time frame. So, I want to pick up rows 4 and 5. Can someone advise on the Splunk query format for this?

Table = Connect_Log

Row   Time        Log text
1     7:00:00am   connected
2     7:30:50am   disconnected
3     7:31:30am   connected
4     8:00:10am   disconnected
5     8:10:30am   disconnected
Hi @Ste, how are you? The HTML entity for > is &gt; but your SPL is using &gr; instead. It should be:

| where my_time&gt;=relative_time(now(),"-1d@d") AND my_time&lt;=relative_time(now(),"@d")