All Posts

Do the Google and ChatGPT tricks include this?

... | chart sum(cost) AS total_cost BY bill_date
| eval total_cost = "$" . total_cost

I'm not entirely sure it will work, but worth a try.
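One caveat with that eval, since it may explain the symptoms: concatenating "$" turns total_cost into a string, which a table will happily display but a column chart may no longer plot as a numeric axis. A slightly fuller, untested variant (the round/tostring formatting is my addition, not from the thread):

... | chart sum(cost) AS total_cost BY bill_date
| eval total_cost = "$" . tostring(round(total_cost, 2), "commas")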
In that setup, I had a packet capture running on the server (Win2022) and never saw it even attempt to connect to the HEC.  I sent curls to the HEC and got good results from the same server.
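For reference, the curl test meant here is the standard HEC check, along these lines (the hostname and token are placeholders matching the config below, and -k mirrors the sslVerifyServerCert = false setting):

curl -k "https://something.something.com:443/services/collector/event" \
    -H "Authorization: Splunk <omitted>" \
    -d '{"event": "hec connectivity test", "sourcetype": "pan:firewall"}'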
Configuration:

inputs.conf

[udp://1514]
connection_host = dns
host = SERVERA
sourcetype = pan:firewall

props.conf

[source::udp:1514]
TRANSFORMS-route = route_to_hec

transforms.conf

[route_to_hec]
REGEX = .
DEST_KEY = _HTTP_ROUTING
FORMAT = sandbox_hec

outputs.conf

[httpout]
defaultGroup = sandbox_hec
indexAndForward = false
disabled = false

[httpout:sandbox_hec]
httpEventCollectorToken = <omitted>
uri = https://something.something.com:443
sslVerifyServerCert = false
disabled = false
"You shouldn't receive syslog directly on a HF anyway." Out of curiosity, why? Here's my scenario:

I have one device type I'm receiving traffic from: Palo Alto firewalls (3-5 at the most). I'm not mixing multiple devices over the same port. I would never send the traffic to 514, because that port sits behind the root user, and it takes seconds to switch to a non-root port. The traffic will be sent on UDP-1514, because if I send it on TCP-1514 I'll be restarting the syslog service on the Palo every other week. Yes, this has been a problem with multiple environments and versions of PAN-OS.

I have a temporary need to capture ~90 days worth of traffic. After that, the HF and the syslog will be shut down. I am not trying to record all logs for posterity/security reasons. What I need is something that can be set up in under an hour with minimal config, minimal server knowledge, and can run reliably for 90 days to ingest syslog and send it via HTTPS to the Splunk environment over the internet.
Hi @rcbutterfield , you should try something like this:

index=wineventlog | stats values(EventCode) AS EventCode count BY eventtype

Ciao. Giuseppe
Hello Splunk People.... I want to return a search within Splunk. The index is wineventlogs and I want to return all the eventcodes within eventtypes. Meaning:

Eventtype A includes eventcodes 5144, 5145, 5146
Eventtype B includes eventcodes 5144, 5166, 5167

As examples.... Thanks to all
Hello @PiotrAp 
Maybe you can do something like that? I added a second streamstats to keep only the results that don't have an associated event (no 1000 event for a 1001, and no 1001 for a 1000), and also to remove the closest 1000 event to a successful 1001:

| makeresults
| eval event_id=1000, username="test", Computer="xx1", _time=strptime("2025-06-30 16:26:27.01", "%Y-%m-%d %H:%M:%S.%N"), resource="example1"
| append [| makeresults | eval event_id=1000, username="test", Computer="xx2", _time=strptime("2025-06-30 16:26:27.02", "%Y-%m-%d %H:%M:%S.%N"), resource="example2"]
| append [| makeresults | eval event_id=1001, username="test", _time=strptime("2025-06-30 16:26:27.03", "%Y-%m-%d %H:%M:%S.%N"), resource="example3"]
| append [| makeresults | eval event_id=1000, username="truc", Computer="yyy", _time=strptime("2025-06-30 16:26:29", "%Y-%m-%d %H:%M:%S"), resource="example2"]
| append [| makeresults | eval event_id=1001, username="truc", Computer="yyy", _time=strptime("2025-06-30 16:26:32", "%Y-%m-%d %H:%M:%S"), resource="example3"]
| sort _time
| streamstats time_window=1s count as nb last(event_id) AS current_event_id, last(eval(if(event_id=1000,event_id,null()))) AS previous_event_id, last(eval(if(event_id=1000,_time,null()))) AS previous_time, last(eval(if(event_id=1000,Computer,null()))) as previous_computer, last(resource) AS current_resource by username
| eval status = if(current_event_id=1001 and previous_event_id=1000,"SUCCESS","FAILURE")
| reverse
| streamstats time_window=1s max(eval(if(event_id=1000,nb,null()))) as max_nb values(status) as statuses by username
| where mvcount(statuses)=1 or nb!=max_nb
| fields - statuses current_event_id current_resource max_nb nb previous_event_id

The query is not very elegant, but it works if I understood well what you want. Maybe someone will have a prettier solution. Don't hesitate to tell me if it suits your needs.
Hi @livehybrid , thank you very much for the reply. I corrected the 'clicked_uf' in my SPL table (it was my oversight), but the problem persists. Regarding my URL not updating with the clicked_uf: it doesn't even appear in the MAPA URL, so it seems that it doesn't recognize the click.
Hi, I have a simple chart visualization, with base SPL:

.... | chart sum(cost) AS total_cost BY bill_date

I'm trying to add a "$" format to total_cost to show on the columns. I've tried every Google and ChatGPT trick and I can't get it working. Any ideas? Thanks!
Any update on the query shared?
Good point! My refresh interval would have definitely been set to more than 10 seconds, so good call on that. Unfortunately, I spoke too soon—this solution won’t work as it refreshes the dashboard for all users, which isn’t the intended behaviour. I still need something similar to looping through browser tabs, but without requiring the installation of an extension.
I can share the log file from the migration; however, I don't see an option here to upload. Please let me know if there is a way to share/upload log files. Thanks
While that is doable with JS code in a Simple XML dashboard, it's worth considering what the goal is here. Each load of the dashboard (unless it uses a scheduled report to power the widgets) will execute searches. If their results aren't supposed to change, then instead of going for rotating dashboards you might create one bigger dashboard and use JS to toggle the visibility of panels. This will give you the same result as carouseling the panels, but will not stress the server by spawning the same searches repeatedly.
As I said before - doing the upgrade in one move was risky. I don't see the point in uninstalling and reinstalling the package.
Hi @PickleRick 
I did the following in my test environment and the migration was successful. Please let me know your thoughts on this procedure:

* Installed Splunk 9.2.7 on a fresh AL2023 server, verified that the web console was accessible, and confirmed I could log in.
* Stopped the Splunk service.
* Copied the /opt/splunk/etc/ and /opt/splunk/var/lib/splunk directories from the 8.2.2.1 server to the new server.
* Mounted the necessary volumes from the old server to the new one, ensuring the index data was available.
* Uninstalled Splunk 9.2.7.
* Noted that, after uninstalling, the etc and db directories remained intact.
* Reinstalled Splunk 9.2.7.
* During the initial start with sudo /opt/splunk/bin/splunk start --accept-license, I observed that Splunk successfully migrated the configuration and settings in etc to be compatible with 9.2.7.
* I now have a fully functional Splunk v9.2.7 instance with all historical indexed data present.
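For anyone reproducing the copy step, a rough sketch (the host name and the rsync-over-ssh transport here are assumptions; Splunk should be stopped on both servers first):

# on the new AL2023 server, with Splunk 9.2.7 installed and stopped
sudo /opt/splunk/bin/splunk stop
# pull configuration and index data from the old 8.2.2.1 server
rsync -a old-server:/opt/splunk/etc/ /opt/splunk/etc/
rsync -a old-server:/opt/splunk/var/lib/splunk/ /opt/splunk/var/lib/splunk/
# the first start performs the in-place migration of etc to 9.2.7
sudo /opt/splunk/bin/splunk start --accept-license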
Hi, this is what did the trick for me. Save it as "dashboard-carousel.js":

(function() {
    // List of dashboard URLs to cycle through
    const urls = [
        "http://10.......",
        "http://10.......",
        "http://10......."
    ];

    // Time interval for cycling (in milliseconds)
    const interval = 10000; // 10 seconds

    // Get the current index from the URL query parameter (default to 0)
    const urlParams = new URLSearchParams(window.location.search);
    let currentIndex = parseInt(urlParams.get("index")) || 0;

    // Function to redirect to the next dashboard
    function cycleDashboards() {
        // Increment the index for the next cycle
        currentIndex = (currentIndex + 1) % urls.length;
        // Redirect to the next URL with the updated index as a query parameter
        window.location.href = `${urls[currentIndex]}?index=${currentIndex}`;
    }

    // Start the cycling process after the specified interval
    setTimeout(cycleDashboards, interval);
})();

Then reference it like this in your dashboard XML:

<dashboard version="1.0" hideChrome="true" script="dashboard-carousel.js">
I'm not familiar with these add-ons so I'm not sure how your process works. If you're indeed receiving data on the HEC input, it's up to you on the source side to export only a subset of your events. That's usually the most effective way, because it's better not to send the data at all than to send it, receive it, and then filter it out, wasting resources on stuff you don't need and don't want. If you cannot do that, the document I pointed you to describes how you can filter your events.
@Jayanthan 
In this case, as mentioned before, you can use Ingest Actions. They allow you to filter, mask, or route events before they are indexed. For reference, check this out: https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Using_ingest_actions_in_Splunk_Enterprise
Go to Settings > Data > Ingest Actions and create a rule set. Alternatively, you can use the traditional method on your Heavy Forwarder by configuring props.conf and transforms.conf; a sketch of that is below.
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
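If you go the props.conf/transforms.conf route, the standard pattern for dropping unwanted events is the nullQueue. The sourcetype stanza and regex below are placeholders, not from this thread:

props.conf

# placeholder sourcetype; use the one your add-on assigns
[my:sourcetype]
TRANSFORMS-drop = drop_unwanted

transforms.conf

[drop_unwanted]
# events matching this regex are routed to the nullQueue and discarded
REGEX = pattern_to_drop
DEST_KEY = queue
FORMAT = nullQueue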
Hi @RanjiRaje 
Rather than trying to piggyback on someone else's issue, please start a new topic in Answers with details of your specific use case, what you have tried so far, and the issues you are facing.
Hi @livehybrid 
I am right now using the "Splunk Add-on for Microsoft Cloud Services" and the "Splunk Add-on for AWS", using HEC to collect logs from the cloud.