All Posts

Hello,

I have a lookup with URLs like:

url
www.url.com
.url.com
site.url.com

I'm trying to match it against my proxy logs to check whether users access those sites, but I have issues with ".url.com" since it doesn't exactly match the hostname. I tried replacing it with "*.url.com", but a Splunk lookup doesn't match wildcards by default. I have tried things like this, but nothing worked:

| inputlookup all_url.csv
| rename url as lookup_url
| join type=inner
    [ search index=my-proxy
    | eval lookup_url="*" . lookup_url . "*"
    | search hostname=lookup_url ]

Do you have any idea? Thanks
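One approach that may fit here, sketched under the assumption that you can create a lookup definition over all_url.csv: transforms.conf supports wildcard matching for lookups via match_type. The stanza name all_url_lookup is illustrative, and the CSV entries would need explicit wildcards (e.g. *.url.com instead of .url.com):

# transforms.conf (on the search head)
[all_url_lookup]
filename = all_url.csv
match_type = WILDCARD(url)

Then match the proxy hostname directly against the lookup, without a join:

index=my-proxy
| lookup all_url_lookup url AS hostname OUTPUT url AS matched_url
| where isnotnull(matched_url)

Events where matched_url is populated are the ones whose hostname matched a (possibly wildcarded) entry in the lookup.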
I'm working to automate the creation of muting rules for our maintenance windows. I've been looking around to see if there is a way to use the API to create a muting rule, but I'm not finding anything; does this not exist? Is there an existing integration with ServiceNow that would do this that I'm just not finding? I'm hoping to tie into our change management system to have these muting windows created automatically upon approval.
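If this is Splunk Observability Cloud, a rough sketch of what a change-approval hook could call is below. The realm (us1), token variable, and payload values are all illustrative; verify the endpoint and field names against the current /v2/alertmuting API reference before relying on this:

# Create a muting rule for a maintenance window (times are epoch milliseconds)
curl -X POST "https://api.us1.signalfx.com/v2/alertmuting" \
  -H "Content-Type: application/json" \
  -H "X-SF-TOKEN: $MY_API_TOKEN" \
  -d '{
        "description": "CHG0012345 maintenance window",
        "startTime": 1739145600000,
        "stopTime": 1739149200000,
        "filters": [
          { "property": "service", "propertyValue": "checkout" }
        ]
      }'

A ServiceNow business rule or flow that fires on change approval could fill in startTime/stopTime from the change record and call this endpoint.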
Hey, did you ever find a way to do this?
I appreciate everyone's input on this! I ended up deploying RHEL 8 servers for now. I will nudge them towards RHEL 9 when they are ready to upgrade the version of their Splunk cluster.  Thanks! Daniel
I'm trying to get the product to do what the examples show it doing. On the Events tab, I see the Pie chart responding to a click event and updating the screen to show which wedge was clicked. I'm simply trying to re-create what is shown to be a feature of the product. Could I accomplish the same thing in a different way? Of course. But I'm trying to learn how to use the actual features of the product. This is obviously a feature, since I can see it working on the Events tab of the page linked above. I just don't know HOW to do what I see it doing.
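If the example in question is a Dashboard Studio dashboard, that click behavior is typically wired up with an event handler in the dashboard's JSON source. A minimal sketch, assuming a pie chart and a token named selected_wedge (both names are illustrative; check the dashboard definition of the example you're looking at):

{
  "type": "splunk.pie",
  "dataSources": { "primary": "ds_search_1" },
  "eventHandlers": [
    {
      "type": "drilldown.setToken",
      "options": {
        "tokens": [
          { "token": "selected_wedge", "key": "name" }
        ]
      }
    }
  ]
}

Other panels can then reference $selected_wedge$ in their searches or markdown to react to the click.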
Thank you for your reply. At this stage, we are testing the system and currently only receiving data from a single device. Our goal is to demonstrate the value of Splunk to our clients so they can begin using it. However, before reaching that point, we need to resolve these types of issues to ensure a smooth production environment where clients can rely on their data being available. We were able to revert to a previous snapshot, which helped restore the system to a cleaner state. However, I am now focused on finding a solution to prevent the system from hitting the 500 MB index data limit, in order to avoid license violations until we are ready to move to an enterprise license. Any advice on how to adjust the system or prevent this issue would be greatly appreciated. Thank you again for your assistance.
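One common way to keep daily indexed volume under the limit, sketched under the assumption that you can identify low-value events by a pattern (the sourcetype and regex below are illustrative), is to drop them at parse time on the indexer or heavy forwarder by routing them to the null queue:

# props.conf
[my:device:sourcetype]
TRANSFORMS-drop_noise = drop_debug_events

# transforms.conf
[drop_debug_events]
REGEX = level=(DEBUG|TRACE)
DEST_KEY = queue
FORMAT = nullQueue

Events matching the regex are discarded before indexing, so they never count against the 500 MB/day license.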
I cannot download Splunk Enterprise. Once I click on download, all I get is a zip file with a .tgz extension.
Hi @Zorghost, it isn't so clear to me, because you have the same information available in Splunk in a dynamic way, instead of in a static way on the share. Anyway, you have to define a search that extracts only the fields you need, not the full events; in this way, you'll greatly reduce the amount of data to extract. Ciao. Giuseppe
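A minimal sketch of that idea, applied to the audit search from this thread (which fields exist varies by action, so check what your events actually carry; object is an assumption here):

index=_audit sourcetype=audittrail (action=edit* OR action=create* OR action=delete* OR action=restart*)
| table _time user action info object

Keeping only a handful of fields instead of the raw events is usually the single biggest size reduction before exporting.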
Hello @kiran_panchavat,

Thanks for your response. However, I checked and found that SELinux is already permissive here:

[acnops_splunk@IEM***** ~]$ getenforce
Permissive

Also, I ran curl from the local server and don't see any connection error in the output below:

[acnops_splunk@IEM****** ~]$ curl -v http://<serverip>:8000
* Rebuilt URL to: http://<serverip>:8000/
*   Trying <serverip>...
* TCP_NODELAY set
* Connected to <serverip> port 8000 (#0)
> GET / HTTP/1.1
> Host: <serverip>:8000
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 303 See Other
< Date: Fri, 07 Feb 2025 13:30:56 GMT
< Content-Type: text/html; charset=UTF-8
< X-Content-Type-Options: nosniff
< Content-Length: 339
< Location: http://<serverip>:8000/en-US/
< Vary: Accept-Language
< Connection: Keep-Alive
< X-Frame-Options: SAMEORIGIN
< Server: Splunkd
<
<!doctype html><html><head><meta http-equiv="content-type" content="text/html; charset=UTF-8"><meta http-equiv="refresh" content="1;url=http://<serverip>:8000/en-US/"><title>303 See Other</title></head><body><h1>See Other</h1><p>The resource has moved temporarily <a href="http://<serverip>/en-US/">here</a>.</p></body></html>
* Connection #0 to host <serverip> left intact
If I am understanding this, the common field between the two sets of events is IP address? Try:

| stats values(*) AS * by ip_add

Then after the stats command you can do your renaming.
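For example, a sketch using the field names from the question (the renames are illustrative):

(index=indexA) OR (index=indexB)
| eval ip_add=coalesce(IP_Address, assigned_ip)
| stats values(*) AS * by ip_add
| rename host_name AS hostname, networkSwitch AS Switch, switchPort AS Port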
Thank you for the reply @gcusello. I want to extract the data from that index -> process it -> send it to a file share. The issue is that I can't work with data larger than 20 MB in the platform that I am using to automate this process. Therefore, I'm looking for a more specific query that returns smaller data.
Hi @Zorghost, sorry, but it isn't clear to me what you want to do: what do you mean by "archive"? Splunk audit logs are in the _audit index, which by default is retained for 6 years. In addition, I don't understand what you mean by 900 MB/day; do you extract these data? Why? Anyway, you could group the data that are relevant for you and extract only them. If you want, you could write the grouped data to a summary index and store them there. Ciao. Giuseppe
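A minimal sketch of the summary-index idea, assuming a scheduled hourly search and a pre-created index named summary_audit (both illustrative):

index=_audit sourcetype=audittrail (action=edit* OR action=create* OR action=delete* OR action=restart*)
| bin _time span=1h
| stats count by _time user action
| collect index=summary_audit

Exporting from summary_audit instead of _audit means moving counts per user/action per hour rather than every raw audit event, which should come in far below the 20 MB constraint.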
@rukshar Check if SELinux is blocking access. After upgrading to RHEL 8.10, SELinux policies may restrict Splunk Web. Temporarily disable SELinux and test:

sudo setenforce 0

If this resolves the issue, permanently disable SELinux by modifying the configuration and rebooting the server:

sudo vi /etc/selinux/config

Change: SELINUX=enforcing → SELINUX=disabled

Try accessing the Splunk UI locally from the server to confirm whether the firewall is blocking external access:

curl -v http://<splunk-ip>:8000

Check Splunk logs for any web service issues:

cat /opt/splunk/var/log/splunk/web_service.log
Hello everyone, I am planning to automate a process where we need to archive admin activity for our Splunk application. For that, I need a query to fetch all the privileged actions conducted by admins inside the Splunk application. My first thought was to use the following query:

index=_audit sourcetype="audittrail" (action=edit* OR action=create* OR action=delete* OR action=restart*)

Unfortunately, this query emits a lot of data (around 900 MB per day), which the platform that I am using for automation can't work with. Is there a query I can use to get the data I need more specifically, to the point where it reduces the size to around 20 MB? I would appreciate any help, and thank you in advance!
Good day,

I'm hoping someone smarter than me can help me figure this out. In the search below, I'm trying to correlate a set of events from two different indexes. IndexA has network switch connection logs, and IndexB has DHCP hostname mappings. I want to combine the information from both. IndexA has a unique SessionID value that I'm using to differentiate individual connection attempts, and I want my stats table to summarize by this field only, so I can see information per connection attempt. IndexB does not have this field, however.

For reference, in the narrow time range I'm working within, there are only two SessionIDs for the same MAC/IP address pair. Likewise, there's only a single set of unique DHCP logs. This was done deliberately by unplugging/plugging a computer into the network twice to generate two connection attempts while keeping the same DHCP information.

Because the SessionID field is not in the second index, I ran the first stats command to summarize my events on the two fields the two indexes did share: the MAC and IP address. With the individual events now gone and everything summarized, I ran the second stats command to then summarize by Session_ID. This works except for one flaw. As stated above, there are only two Session_IDs, contained within two events, each with its own _time field. Because I use the values() function of stats, both timestamps are printed as a multi-value field in each row of the stats table, and you're not able to tell which timestamp belongs to which Session_ID.

I've tried different permutations of things, such as using mvexpand between the stats commands to split the time_indexA field (I'm not interested in charting the time for events from indexB) back into individual events. I've also tried summarizing by time in the first stats command alongside the MAC/IP address. I attempted using eventstats as well, but it's not a command I'm very familiar with, so that didn't work either. And finally, when I do manage to make some progress with correlating each timestamp to its own event, so far I've always lost the hostname field from indexB as a byproduct. I've attached a picture of the table when run in case my explanation is subpar.

(index=indexA) OR (index=indexB)
| rex field=text "AuditSessionID (?<SessionID>\w+)"
| rex field=pair "session-id=(?<SessionID>\w+)"
| eval time_{index}=strftime(_time,"%F %T")
| eval ip_add=coalesce(IP_Address, assigned_ip), mac_add=coalesce(upper(src_mac), upper(mac))
| eval auth=case(CODE=45040, "True", true(), "False")
| stats values(host_name) as hostname values(networkSwitch) as Switch values(switchPort) as Port values(auth) as Auth values(SessionID) as Session_ID values(time_indexA) as time by mac_add, ip_add
| stats values(time) as time values(hostname) as hostname values(Switch) as Switch values(Port) as Port values(Auth) as Auth values(ip_add) as IP_Address values(mac_add) as MAC_Address by Session_ID
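One way to keep each SessionID paired with its own timestamp, sketched against the field names above (untested against this data, so adjust names if yours differ): instead of two stats passes, use eventstats to copy the indexB hostname onto the indexA events that share the MAC/IP pair, then run a single stats by SessionID so each row keeps its own session's time:

(index=indexA) OR (index=indexB)
| rex field=text "AuditSessionID (?<SessionID>\w+)"
| rex field=pair "session-id=(?<SessionID>\w+)"
| eval ip_add=coalesce(IP_Address, assigned_ip), mac_add=coalesce(upper(src_mac), upper(mac))
| eval auth=case(CODE=45040, "True", true(), "False")
| eventstats values(host_name) as hostname by mac_add, ip_add
| search index=indexA
| stats earliest(_time) as session_start values(hostname) as hostname values(networkSwitch) as Switch values(switchPort) as Port values(auth) as Auth values(ip_add) as IP_Address values(mac_add) as MAC_Address by SessionID
| eval time=strftime(session_start, "%F %T")
| fields - session_start

Because the hostname is copied onto every matching event before the indexB events are filtered out, it survives into the final table while each SessionID row shows only its own start time.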
@solg As far as I know, you can send the TRAP data using a HEC token or via syslog. Kindly check the add-on below; it is intended to be installed on Splunk search heads or HFs where Splunk HEC is configured for Proofpoint TRAP. As of now, there is no official Splunk add-on specifically designed for integrating Proofpoint Threat Response Auto-Pull (TRAP) Cloud with Splunk. However, the "CCX Extensions for Proofpoint Products" app on Splunkbase includes a component named proofpoint:trap:hec, which is intended for integrating Proofpoint TRAP with Splunk. https://splunkbase.splunk.com/app/6339
@solg It looks like there is nothing publicly available. We had to reach out to Proofpoint for the Python script to get TRAP data in. It sounds like a question for Proofpoint. You can download the app and related TAs here:

App: https://splunkbase.splunk.com/app/3727/#/details
Gateway TA: https://splunkbase.splunk.com/app/3080/
TAP TA: https://splunkbase.splunk.com/app/3681/
TRAP Cloud has an API to export information, but there is no add-on to integrate TRAP Cloud with Splunk. Has anyone made this integration successfully? Is there any intention to implement a supported add-on in Splunk to integrate TRAP Cloud?
We successfully configured FortiWeb SaaS -> Splunk SSL syslog via inputs.conf:

[tcp-ssl:6514]
index = <index>
sourcetype = fwbcld_log
disabled = 0

[SSL]
requireClientCert = false
Thanks @bowesmana, it worked for me. Another question: is it possible to fetch only the record with the latest END_TIME when we have multiple records with different END_TIME values? Currently, if there are 2 records with different END_TIME for the same JOBNAME, we get 2 records. Is it possible to display only 1 record per JOBNAME with the latest END_TIME?
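A minimal sketch, assuming END_TIME parses with strptime in a format like "%Y-%m-%d %H:%M:%S" (adjust to your actual format): sort descending by the parsed time, then keep the first record per JOBNAME.

... your existing search ...
| eval end_epoch=strptime(END_TIME, "%Y-%m-%d %H:%M:%S")
| sort 0 -end_epoch
| dedup JOBNAME
| fields - end_epoch

dedup keeps the first event it sees for each JOBNAME, so after the descending sort that is the one with the latest END_TIME.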