If I am understanding this correctly, the common field between the two sets of events is the IP address? Try | stats values(*) AS * by ip_add. Then, after the stats command, you can do your renaming.
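A minimal sketch of that pattern, assuming the field names from the original question (IP_Address, assigned_ip, host_name):

    (index=indexA) OR (index=indexB)
    | eval ip_add=coalesce(IP_Address, assigned_ip)
    | stats values(*) AS * by ip_add
    | rename host_name AS hostname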
Thank you for the reply @gcusello , I want to extract the data from that index -> process it -> send it to a file share. The issue is that I can't work with data larger than 20 MB in the platform that I am using to automate this process. Therefore, I'm looking for a more specific query that returns smaller data.
Hi @Zorghost , sorry, but it isn't clear to me what you want to do: what do you mean by "archive"? Splunk audit logs are in the index _audit, which by default is retained for 6 years. In addition, I don't understand what you mean by 900 MB/day: do you extract these data? Why? Anyway, you could group the data that are relevant for you and extract only those. If you want, you could write the grouped data to a summary index and store them there. Ciao. Giuseppe
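A hedged sketch of that summary-index approach (the grouping fields and the index name admin_activity_summary are assumptions, not from the original post):

    index=_audit sourcetype=audittrail (action=edit* OR action=create* OR action=delete* OR action=restart*)
    | stats count earliest(_time) as first_seen latest(_time) as last_seen by user, action
    | collect index=admin_activity_summary

Scheduled as a saved search, this writes only the grouped rows to the summary index instead of the raw audit events.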
@rukshar  Check if SELinux is blocking access. After upgrading to RHEL 8.10, SELinux policies may restrict Splunk Web. Temporarily disable SELinux and test:

    sudo setenforce 0

If this resolves the issue, permanently disable SELinux by modifying the configuration, then reboot the server:

    sudo vi /etc/selinux/config

Change: SELINUX=enforcing → SELINUX=disabled

Try accessing the Splunk UI locally from the server to confirm whether the firewall is blocking external access:

    curl -v http://<splunkipadd>:8000

Check the Splunk logs for any web service issues:

    cat /opt/splunk/var/log/splunk/web_service.log
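If the port is reachable but the UI still misbehaves, a hedged follow-up from Splunk itself (assuming the _internal index is searchable and the log_level field is extracted for these events):

    index=_internal sourcetype=splunk_web_service log_level=ERROR earliest=-4h
    | head 50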
Hello everyone, I am planning to automate a process where we need to archive admin activity for the Splunk application. For that I would require a query to fetch all the privileged actions conducted by admins inside the Splunk application. My first thought was to use the following query:

    index=_audit sourcetype="audittrail" (action=edit* OR action=create* OR action=delete* OR action=restart*)

Unfortunately, this query emits a lot of data (around 900 MB per day), which the platform that I am using for automation can't work with. Is there maybe a query that I can use to get the data I need in a more specific way, to the point where it reduces the size to 20 MB or so? I would appreciate any help, and thank you in advance!
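One hedged way to shrink the export (the field list is an assumption; check which fields your audit events actually carry):

    index=_audit sourcetype=audittrail (action=edit* OR action=create* OR action=delete* OR action=restart*) user!="splunk-system-user"
    | table _time, user, action, info

Dropping _raw via table and excluding system accounts usually cuts the volume sharply; aggregating further with | stats count by user, action reduces it to a handful of rows if per-event detail is not required.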
Good day, I'm hoping someone smarter than me can help me figure this out. In the search below, I'm trying to correlate a set of events from two different indexes. IndexA has network switch connection logs, and IndexB has DHCP hostname mappings. I want to combine the information from both. IndexA has a unique SessionID value that I'm using to differentiate individual connection attempts, and I want my stats table to summarize by this field only, so I can see information per connection attempt. IndexB does not have this field, however.

For reference, in the narrow time range I'm working within, there are only two SessionIDs for the same MAC/IP address pair. Likewise, there's only a single set of unique DHCP logs. This was done deliberately by unplugging/plugging a computer into the network twice to generate two connection attempts while keeping the same DHCP information.

Because the SessionID field is not in the second index, I ran the first stats command to summarize my events on the two fields the two indexes did share: the MAC and IP address. With the individual events now gone and everything summarized, I ran the second stats command to then summarize by Session_ID. This works except for one flaw. As stated above, there are only two Session_IDs contained within two events, each with their own _time field. Because I use the values() function of stats, both timestamps are printed as a multi-value field in each row of the stats table, and you can't tell which timestamp belongs to which Session_ID.

I've tried different permutations of things, such as using mvexpand between the stats commands to split the time_indexA field (I'm not interested in charting the time for events from indexB) back into individual events. I've also tried summarizing by time in the first stats command alongside the MAC/IP address. I attempted using eventstats as well, but it's not a command I'm very familiar with, so that didn't work either. And finally, when I do manage to make some progress with correlating each timestamp to its own event, so far I've always lost the hostname field from indexB as a byproduct. I've attached a picture of the table when run in case my explanation is subpar.

    (index=indexA) OR (index=indexB)
    | rex field=text "AuditSessionID (?<SessionID>\w+)"
    | rex field=pair "session-id=(?<SessionID>\w+)"
    | eval time_{index}=strftime(_time,"%F %T")
    | eval ip_add=coalesce(IP_Address, assigned_ip), mac_add=coalesce(upper(src_mac), upper(mac))
    | eval auth=case(CODE=45040, "True", true(), "False")
    | stats values(host_name) as hostname values(networkSwitch) as Switch values(switchPort) as Port values(auth) as Auth values(SessionID) as Session_ID values(time_indexA) as time by mac_add, ip_add
    | stats values(time) as time values(hostname) as hostname values(Switch) as Switch values(Port) as Port values(Auth) as Auth values(ip_add) as IP_Address values(mac_add) as MAC_Address by Session_ID
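A hedged sketch of one approach, reusing the field names from the search above and assuming only the indexA events carry a SessionID: copy the hostname from the indexB events onto the indexA events with eventstats, then aggregate by SessionID alone so each row keeps its own timestamp.

    (index=indexA) OR (index=indexB)
    | rex field=text "AuditSessionID (?<SessionID>\w+)"
    | eval ip_add=coalesce(IP_Address, assigned_ip), mac_add=coalesce(upper(src_mac), upper(mac))
    | eval auth=case(CODE==45040, "True", true(), "False")
    | eventstats values(host_name) as hostname by mac_add, ip_add
    | where isnotnull(SessionID)
    | stats min(_time) as _time values(hostname) as hostname values(networkSwitch) as Switch values(switchPort) as Port values(auth) as Auth values(ip_add) as IP_Address values(mac_add) as MAC_Address by SessionID
    | eval time=strftime(_time, "%F %T")

eventstats annotates events without collapsing them, so the hostname from indexB lands on every event sharing the MAC/IP pair before the final stats groups by SessionID.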
@solg  As far as I know, you can send the TRAP data using an HEC token or via syslog. As of now, there is no official Splunk add-on specifically designed for integrating Proofpoint Threat Response Auto-Pull (TRAP) Cloud with Splunk. However, the "CCX Extensions for Proofpoint Products" app on Splunkbase includes a component named proofpoint:trap:hec, which is intended for integrating Proofpoint TRAP with Splunk. That add-on is intended to be installed on Splunk search heads or heavy forwarders where Splunk HEC is configured for Proofpoint TRAP. https://splunkbase.splunk.com/app/6339
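Once the token is in place, a hedged sanity check that TRAP events are arriving (the index name is an assumption; use whichever index the HEC token targets):

    index=main sourcetype="proofpoint:trap:hec" earliest=-24h
    | stats count by host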
@solg It looks like there is nothing publicly available. We had to reach out to Proofpoint for the Python script to get TRAP data in. It sounds like a question for Proofpoint. You can download the app and related TAs here:
App: https://splunkbase.splunk.com/app/3727/#/details
Gateway TA: https://splunkbase.splunk.com/app/3080/
TAP TA: https://splunkbase.splunk.com/app/3681/
TRAP Cloud has an API to export information, but there is no add-on to integrate TRAP Cloud with Splunk. Has anyone made this integration successfully? Is there any intention to implement a supported add-on on Splunk to integrate TRAP Cloud?
We successfully configured FortiWeb SaaS -> Splunk SSL syslog via inputs.conf:

    [tcp-ssl:6514]
    index = <index>
    sourcetype = fwbcld_log
    disabled = 0

    [SSL]
    requireClientCert = false
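A hedged check that the input is receiving data (substitute the real index name):

    index=<index> sourcetype=fwbcld_log earliest=-1h
    | stats count by host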
Thanks @bowesmana , it worked for me. Another question: is it possible to fetch only the record with the latest END_TIME when we have multiple records with different END_TIME values? Currently, if there are 2 records with different END_TIME values for the same JOBNAME, we get 2 records. Is it possible to display only 1 record per JOBNAME with the latest END_TIME?
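A hedged sketch, assuming END_TIME sorts correctly as a string or number (convert it with strptime first if it's a free-form timestamp):

    <your existing search>
    | sort 0 - END_TIME
    | dedup JOBNAME

dedup keeps the first event it sees per JOBNAME, which after the descending sort is the one with the latest END_TIME. An alternative is | stats latest(*) as * by JOBNAME, provided the events' _time order matches END_TIME.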
Hi @Skv , which script? Cache is a normal feature of Splunk Forwarders. Ciao. Giuseppe
Hello @vvkarur , you can try this regex: | rex field=_raw "\"role\":\"(?<field_name>\w+)\"" Thanks!
Hi guys! I've been struggling for a while to understand metrics. When making a line chart for both the average and max value, the trend is exactly the same. This is the query:

    | mstats avg("% Processor Time") as Avg, max("% Processor Time") as Max where index="metric_index" AND collection=CPU AND host="host" span=1m
    | fields _time, Avg, Max

But if I take the avg and max of the value over the same time range, I get two different values. Query used:

    | mstats avg("% Processor Time") as Avg, max("% Processor Time") as Max where index="metric_index" AND "collection"="CPU" AND "host"="host"

Earlier I had this data ingested as events, and then I got different trends for avg and max. The inputs.conf file looks like this (using the Splunk_TA_windows app):

    ## CPU
    [perfmon://CPU]
    counters = % Processor Time
    disabled = 0
    samplingInterval = 2000
    stats = average; min; max
    instances = _Total
    interval = 60
    mode = single
    object = Processor
    useEnglishOnly=true
    formatString = %.2f
    index = metric_index

Is someone able to explain why this happens? Thanks in advance
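One hedged diagnostic, not a full answer: with interval = 60 the input emits roughly one data point per minute, so a span=1m chart has a single sample per bucket, and the avg and max of a single sample are identical by construction. You can confirm how many points land in each bucket with:

    | mstats count("% Processor Time") as points where index="metric_index" AND collection=CPU AND host="host" span=1m

If points is 1 everywhere, try a wider span (e.g. span=15m) and the avg and max lines should diverge again.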
Could you please share the script and how it can be used, @gcusello?
Unfortunately, no.
Hi @jngo , I have exactly the same problem. Have you found a solution to this situation? Thanks, Olivier
We are using an http URL with the setting enableSplunkWebSSL = false in the web.conf file. The host where I am trying to access the Splunk web UI is a Windows machine, and the telnet I did was from the Splunk server itself, a Linux machine, which is the one that is not accessible via the URL. Below is the output from the Splunk server:

    sudo iptables -L
    Chain INPUT (policy ACCEPT)
    target  prot opt source    destination
    ACCEPT  tcp  --  anywhere  anywhere     tcp dpt:irdmi
    ACCEPT  tcp  --  anywhere  anywhere     tcp dpt:palace-6
    ACCEPT  tcp  --  anywhere  anywhere     tcp dpt:distinct32
    ACCEPT  tcp  --  anywhere  anywhere     tcp dpt:8089
    ACCEPT  tcp  --  anywhere  anywhere     tcp dpt:distinct

    Chain FORWARD (policy ACCEPT)
    target  prot opt source    destination

    Chain OUTPUT (policy ACCEPT)
    target  prot opt source    destination

    sudo firewall-cmd --list-all
    public (active)
      target: default
      icmp-block-inversion: no
      interfaces: eth0
      sources:
      services: dhcpv6-client ssh
      ports:
      protocols:
      forward: no
      masquerade: no
      forward-ports:
      source-ports:
      icmp-blocks:
      rich rules:

Looking forward to a solution.
Hi @Bluekeeper , sorry, but I don't understand your requirement: why do you want to do this? About your question: REST is used only for searching. About credentials, you could try to store them using Splunk's own encryption, but I don't understand what you want to do. I suppose that you would modify some conf file in the deployment-apps folder of the Deployment Server; in that case, the only solution is a script outside the Splunk web GUI. Ciao. Giuseppe
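A hedged illustration of the encrypted-credential idea (the app namespace "search" is an assumption): secrets saved through Splunk's storage/passwords endpoint can be listed from SPL with

    | rest /servicesNS/nobody/search/storage/passwords splunk_server=local
    | table title, username, realm

A script outside the web GUI would read the same endpoint from splunkd on port 8089; the clear_password field is only returned to roles holding the required capability.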
What is it you are trying to achieve and why can you not do it using simple drilldowns?