All Posts

Hi @Pranita_P 

For each table, click the Visualisation dropdown on the right and select the "When data is unavailable, hide element" checkbox. This means the table will not show when there are no results. Now you need each table's search to return no results depending on whether "value 5" is selected in your dropdown. You can do this with a where statement on each table (it may not be the most efficient approach, but I'm not sure how else you could achieve it). You would add something like:

| where "$yourDropdownToken$"="value 5"

Obviously you can change = to != as required.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
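As a sketch of how the two table searches might end up looking, assuming a token named yourDropdownToken and purely hypothetical base searches (index and search terms are placeholders, not from the post):

Table 1 (shown for values 1-4):
index=main your_base_search_1 | where "$yourDropdownToken$"!="value 5"

Table 2 (shown only for value 5):
index=main your_base_search_2 | where "$yourDropdownToken$"="value 5"

With "When data is unavailable, hide element" enabled on both tables, at most one of them should be visible for any given dropdown selection.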
I have 1 dropdown with 5 values (value 1, value 2, value 3, value 4, value 5), and I have assigned a token to this dropdown.

For value 1 - value 4, I have a table (table 1) below the dropdown, where the results change according to the selected value. It uses one query underneath to get the details.

What I want is another table (table 2) which should display in place of table 1 [hide table 1 and display table 2 instead] when value 5 is selected from the dropdown. Table 2 needs to use a different query.

How can I achieve this?
OK. You can't have multiple httpout groups. It's not tcpout. At this point you can have either tcpout (possibly with multiple output groups) or a single httpout output. And there is no _HTTP_ROUTING key. It mistakenly appeared in the docs back around the 7.something version but was removed since it was an error.
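Given that, a minimal working httpout configuration would collapse everything into the single [httpout] stanza and drop the routing transform entirely. A hedged sketch, reusing the placeholder URI and token from the post below (not a verified config):

outputs.conf
[httpout]
httpEventCollectorToken = <omitted>
uri = https://something.something.com:443

The [httpout:sandbox_hec] group, the defaultGroup setting, and the _HTTP_ROUTING transform would all go away under this reading.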
Thank you! It works perfectly!
There are several cons to receiving syslog directly on a HF (or UF):
- It's more complicated to manage.
- Splunk doesn't reliably capture network-level metadata, so for receiving different types of sources you need to bend over backwards, use multiple ports, and/or do strange things at index time.
- It's usually more resource-intensive than using dedicated syslog daemons.
- It's more robust to use a separate syslog component, especially with UDP transport, and especially with a HF, which can take significant time to restart when needed, causing holes in your received data.

In some situations (as yours might well be) it's "good enough", but I'd rather use a dedicated syslog component in prod.
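For comparison, a dedicated-syslog setup of the kind described above can be quite small. A hedged rsyslog sketch, where the port, file path, and template name are my assumptions rather than anything from this thread:

# /etc/rsyslog.d/pan.conf - hypothetical sketch
module(load="imudp")
input(type="imudp" port="1514")

# Write everything arriving on that UDP input to per-sender files,
# which a UF/HF can then monitor with a regular file input.
template(name="panfile" type="string" string="/var/log/pan/%fromhost-ip%.log")
if $inputname == "imudp" then action(type="omfile" dynaFile="panfile")

The syslog daemon then absorbs restarts and bursts on its own, and Splunk only deals with files.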
Do the Google and ChatGPT tricks include this?

... | chart sum(cost) AS total_cost BY bill_date
| eval total_cost = "$" . total_cost

I'm not entirely sure it will work, but worth a try.
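Another hedged option, if the eval version breaks numeric sorting of the column, is fieldformat, which changes only how the value is rendered. A sketch, not tested against this dashboard (and note that chart axis labels may or may not honour fieldformat, depending on the visualization):

... | chart sum(cost) AS total_cost BY bill_date
| fieldformat total_cost = "$" . tostring(total_cost, "commas")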
In that setup, I had a packet capture running on the server (Win2022) and never saw it even attempt to connect to the HEC.  I sent curls to the HEC and got good results from the same server.
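For reference, a HEC check of the kind mentioned above usually looks like the following CLI fragment (the endpoint path and Authorization header shape are standard HEC; the host and token are placeholders and it is not runnable as-is):

curl -k "https://something.something.com:443/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hec connectivity test"}'

A {"text":"Success","code":0} response indicates the token and endpoint are good.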
Configuration:

inputs.conf
[udp://1514]
connection_host = dns
host = SERVERA
sourcetype = pan:firewall

props.conf
[source::udp:1514]
TRANSFORMS-route = route_to_hec

transforms.conf
[route_to_hec]
REGEX = .
DEST_KEY = _HTTP_ROUTING
FORMAT = sandbox_hec

outputs.conf
[httpout]
defaultGroup = sandbox_hec
indexAndForward = false
disabled = false

[httpout:sandbox_hec]
httpEventCollectorToken = <omitted>
uri = https://something.something.com:443
sslVerifyServerCert = false
disabled = false
You shouldn't receive syslog directly on a HF anyway. Out of curiosity, why?

Here's my scenario:
- I have one device type I'm receiving traffic from: Palo Alto firewalls (3-5 at the most). I'm not mixing multiple devices over the same port.
- I would never send the traffic to 514, because it sits behind the root user. It takes seconds to switch to a non-root port.
- The traffic will be sent on UDP-1514, because if I send it on TCP-1514 I'll be restarting the syslog service on the Palo every other week. Yes, this has been a problem with multiple environments and versions of PANOS.
- I have a temporary need to capture ~90 days' worth of traffic. After that, the HF and the syslog will be shut down. I am not trying to record all logs for posterity/security reasons.

What I need is something that can be set up in under an hour with minimal config and minimal server knowledge, and can run reliably for 90 days to ingest syslog and send it via HTTPS to the internet Splunk environment.
Hi @rcbutterfield ,
you should try something like this:

index=wineventlog
| stats values(EventCode) AS EventCode count BY eventtype

Ciao.
Giuseppe
Hello Splunk people... I want to run a search within Splunk. The index is wineventlogs and I want to return all the eventcodes within eventtypes. Meaning:

Eventtype A includes eventcodes 5144, 5145, 5146
Eventtype B includes eventcodes 5144, 5166, 5167

As examples... thanks to all.
Hello @PiotrAp 

Maybe you can do something like that? I added a second streamstats to keep only the results which don't have an associated event (no 1000 event for a 1001, and no 1001 for a 1000), and also to remove the closest 1000 event to a successful 1001:

| makeresults
| eval event_id=1000, username="test", Computer="xx1", _time=strptime("2025-06-30 16:26:27.01", "%Y-%m-%d %H:%M:%S.%N"), resource="example1"
| append [| makeresults | eval event_id=1000, username="test", Computer="xx2", _time=strptime("2025-06-30 16:26:27.02", "%Y-%m-%d %H:%M:%S.%N"), resource="example2"]
| append [| makeresults | eval event_id=1001, username="test", _time=strptime("2025-06-30 16:26:27.03", "%Y-%m-%d %H:%M:%S.%N"), resource="example3"]
| append [| makeresults | eval event_id=1000, username="truc", Computer="yyy", _time=strptime("2025-06-30 16:26:29", "%Y-%m-%d %H:%M:%S"), resource="example2"]
| append [| makeresults | eval event_id=1001, username="truc", Computer="yyy", _time=strptime("2025-06-30 16:26:32", "%Y-%m-%d %H:%M:%S"), resource="example3"]
| sort _time
| streamstats time_window=1s count as nb last(event_id) AS current_event_id, last(eval(if(event_id=1000,event_id,null()))) AS previous_event_id, last(eval(if(event_id=1000,_time,null()))) AS previous_time, last(eval(if(event_id=1000,Computer,null()))) as previous_computer, last(resource) AS current_resource by username
| eval status = if(current_event_id=1001 and previous_event_id=1000,"SUCCESS","FAILURE")
| reverse
| streamstats time_window=1s max(eval(if(event_id=1000,nb,null()))) as max_nb values(status) as statuses by username
| where mvcount(statuses)=1 or nb!=max_nb
| fields - statuses current_event_id current_resource max_nb nb previous_event_id

The query is not very elegant, but it works if I understood correctly what you want. Maybe someone will have a prettier solution. Don't hesitate to tell me if it suits your need.
Hi @livehybrid , thank you very much for the reply. I corrected the 'clicked_uf' in my SPL table; it was my oversight, but the problem persists. Regarding my URL not updating with clicked_uf: it doesn't even appear in the MAPA URL, so it seems the click isn't being recognized.
Hi, I have a simple chart visualization, with base SPL:

... | chart sum(cost) AS total_cost BY bill_date

I'm trying to add a "$" format to total_cost to show on the columns. I've tried every Google and ChatGPT trick and I can't get it working. Any ideas? Thanks!
Any update on the query shared?
Good point! My refresh interval would have definitely been set to more than 10 seconds, so good call on that. Unfortunately, I spoke too soon—this solution won’t work as it refreshes the dashboard for all users, which isn’t the intended behaviour. I still need something similar to looping through browser tabs, but without requiring the installation of an extension.
I can share the log file from the migration; however, I don't see an option here to upload. Please let me know if there is a way to share/upload log files. Thanks.
While that is doable with JS code in a Simple XML dashboard, it's worth considering what the goal here is. Each load of the dashboard (unless it uses a scheduled report to power the widgets) will execute searches. If their results aren't supposed to change, you might, instead of going for rotating dashboards, create one bigger dashboard and use JS to toggle the visibility of panels. This will give you the same result as carouselling the panels, but will not stress the server by spawning the same searches repeatedly.
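A minimal sketch of that toggle idea, under stated assumptions: the panel ids, the 10-second interval, and the plain DOM calls are all hypothetical, and in a real Simple XML dashboard this would live in a JS file referenced from the dashboard rather than run standalone.

```javascript
// Sketch: rotate visibility among dashboard panels without reloading.
// Pure helper: which panel should be visible at a given tick.
function visiblePanelIndex(tick, panelCount) {
  return ((tick % panelCount) + panelCount) % panelCount;
}

// Hypothetical browser wiring (panel ids are assumptions, not real ids).
if (typeof document !== "undefined") {
  var panels = ["panel_a", "panel_b", "panel_c"];
  var tick = 0;
  setInterval(function () {
    panels.forEach(function (id, i) {
      var el = document.getElementById(id);
      if (el) {
        el.style.display =
          i === visiblePanelIndex(tick, panels.length) ? "" : "none";
      }
    });
    tick += 1;
  }, 10000);
}
```

The searches run once on page load; the rotation afterwards is purely cosmetic, which is the point of the suggestion above.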
As I said before, doing the upgrade in one move was risky. And I don't see the point in uninstalling and reinstalling the package.
Hi @PickleRick 

I did the following in my test environment and the migration was successful. Please let me know your thoughts on this procedure:

* Installed Splunk 9.2.7 on a fresh AL2023 server, verified that the web console was accessible, and confirmed I could log in.
* Stopped the Splunk service.
* Copied the /opt/splunk/etc/ and /opt/splunk/var/lib/splunk directories from the 8.2.2.1 server to the new server.
* Mounted the necessary volumes from the old server to the new one, ensuring the index data was available.
* Uninstalled Splunk 9.2.7.
* Noted that, after uninstalling, the etc and db directories remained intact.
* Reinstalled Splunk 9.2.7.
* During the initial start with sudo /opt/splunk/bin/splunk start --accept-license, I observed that Splunk successfully migrated the configuration and settings in etc to be compatible with 9.2.7.
* I now have a fully functional Splunk v9.2.7 instance with all historical indexed data present.