All Topics


Not sure where & how to address the below skipped job. I would appreciate any guidance.

Report Name: _ACCELERATE_DM_SA-IdentityManagement_Identity_Management.Expired_Identity_Activity_ACCELERATE_
Skip Reason (Skip Count): The maximum number of concurrent historical scheduled searches on this instance has been reached (1)
Alert Actions: none
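One possible direction, assuming the skips come from scheduler concurrency limits rather than a stuck search: stagger the acceleration schedule, or size up the scheduler concurrency in limits.conf. The values below are only a sketch and should be tuned to the instance's CPU count; both settings are real [search] stanza settings, but the numbers shown are assumptions:

```ini
# limits.conf -- sketch; values are assumptions, size to your hardware
[search]
max_searches_per_cpu = 2
base_max_searches = 6
```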
When I do some searches I get records which are very long and have no newlines. The browser (Firefox in my case) effectively freezes up. How can I avoid locking up my browser when doing queries that might return such records?
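One workaround, sketched under the assumption that the long records don't need to be rendered in full (`your_base_search` is a placeholder for the actual query): cut what the browser has to draw before it reaches the results table.

```spl
your_base_search
| eval raw_preview=substr(_raw, 1, 500)
| table _time raw_preview
```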
I need a Single Value widget for a dashboard which displays current RAM usage in real time. This is what I have so far in SPL:

index=index host=host sourcetype=vmstat memUsedMB=* | stats count

And this is all I'm getting. How can I get something more like this?
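A note on why the widget shows a plain count: `| stats count` counts matching events, not the memory value. A sketch that surfaces the most recent reading instead, assuming memUsedMB carries the number you want on the single value:

```spl
index=index host=host sourcetype=vmstat memUsedMB=*
| stats latest(memUsedMB) as "RAM Used (MB)"
```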
We upgraded to 8.1.2 and want to use workload manager; workload manager requires systemd. With 8.1.x you can allow the splunk user to stop/start the systemd splunk service, which works fine; however, it seems to be too broad a configuration, since it also allows stopping/starting other systemd services. Is there a way to lock down the polkit rule so it doesn't grant anything beyond the splunk service? I'll do more research on polkit to see if I can find a way, but I'm wondering if others have done this.

sh-4.2$ sudo /apps/splunk/bin/splunk enable boot-start -systemd-managed 1 -create-polkit-rules 1 -user splunk
CAUTION: The system has systemd version < 237 and polkit version > 105. With this combination, polkit rule created for this user will enable this user to manage all systemd services. Are you sure you want to continue [y/n]? y
Systemd unit file installed at /etc/systemd/system/Splunkd.service.
Polkit rules file installed at /etc/polkit-1/rules.d/10-Splunkd.rules.
Configured as systemd managed service.
sh-4.2$ sudo su - splunk
splunk@qasshd$ systemctl stop amazon-ssm-agent.service
splunk@qasshd$ systemctl status amazon-ssm-agent.service
● amazon-ssm-agent.service - amazon-ssm-agent
   Loaded: loaded (/etc/systemd/system/amazon-ssm-agent.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Wed 2021-02-10 22:19:39 UTC; 7s ago
  Process: 1130 ExecStart=/usr/bin/amazon-ssm-agent (code=exited, status=0/SUCCESS)
 Main PID: 1130 (code=exited, status=0/SUCCESS)
splunk@qasshd$ systemctl start amazon-ssm-agent.service
splunk@qasshd$ systemctl status amazon-ssm-agent.service
● amazon-ssm-agent.service - amazon-ssm-agent
   Loaded: loaded (/etc/systemd/system/amazon-ssm-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-02-10 22:19:55 UTC; 3s ago
 Main PID: 5087 (amazon-ssm-agen)
   Memory: 30.6M
   CGroup: /system.slice/amazon-ssm-agent.service
           ├─5087 /usr/bin/amazon-ssm-agent
           └─5101 /usr/bin/ssm-agent-worker
splunk@qasshd$

This is our rules file, /etc/polkit-1/rules.d/10-Splunkd.rules:

polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.systemd1.manage-units" &&
        subject.user == "splunk") {
        return polkit.Result.YES;
    }
});
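For reference, a sketch of a tighter rule that also checks which unit is being managed. Note the CAUTION message Splunk printed: with systemd < 237 the unit name is not passed through to polkit, so this per-unit check only works on systemd ≥ 237; on older systemd it would deny everything, which matches why the generated rule had to be broad.

```javascript
// /etc/polkit-1/rules.d/10-Splunkd.rules -- sketch, requires systemd >= 237
polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.systemd1.manage-units" &&
        action.lookup("unit") == "Splunkd.service" &&
        subject.user == "splunk") {
        return polkit.Result.YES;
    }
});
```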
I cannot figure out how to round the values presented on the timechart. My SPL:

index=$radio_token$ host=$dropdown_token2$ sourcetype=cpu | eval cpuavg=round(cpu_load_percent, 2) | timechart avg(cpuavg)

And these are the results. How can I get that to appear as 1%, instead of that huge clunky number?
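The round() in that search happens per event, before the aggregation, so avg() reintroduces the long decimals. A sketch that rounds after timechart instead:

```spl
index=$radio_token$ host=$dropdown_token2$ sourcetype=cpu
| timechart avg(cpu_load_percent) as cpuavg
| eval cpuavg=round(cpuavg, 2)
```

The % suffix would then come from the panel's number formatting (unit setting) rather than the search itself.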
I am trying to run two fields against one column using a lookup. This SPL does not work, but conveys what I am trying to do:

| lookup blacklist.csv ioc_list AS (src_ip OR dest_ip) OUTPUTNEW ioc_list, feedback

Is there a way I can do something like the above command without running two separate lookup commands?
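The lookup command's field mapping doesn't accept OR. If two invocations of the same lookup are acceptable, one common sketch is to run it once per input field into distinct output names and coalesce (output field names here are assumptions):

```spl
| lookup blacklist.csv ioc_list AS src_ip OUTPUTNEW ioc_list AS src_ioc, feedback AS src_feedback
| lookup blacklist.csv ioc_list AS dest_ip OUTPUTNEW ioc_list AS dest_ioc, feedback AS dest_feedback
| eval ioc_list=coalesce(src_ioc, dest_ioc), feedback=coalesce(src_feedback, dest_feedback)
```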
Hello guys, I just noticed that in our preproduction environment the local sslPassword on a cluster member was not updated when we pushed a new sslPassword. It seems the push didn't trigger the restart which updates and encrypts sslPassword. Splunk Enterprise 7.3.4. Thanks.
Hello Splunk Community! I am very new to Splunk and SPL. My question is: if I have a dashboard of two panels (VulnScans, Firewall_Events), would I be able to accomplish the following query (or anything like it) in the Firewall_Events panel:

index=firewall src_ip=List_of_IPs_from_table_in_VulnScans AND src_port=List_of_Ports_from_table_in_VulnScans

What I would like to achieve is to take both the vulnerable IPs and their associated vulnerable ports (IP.252 AND port23, IP.224 AND port25, ...) that were output from the query in the VulnScans panel, and search the firewall events for any traffic to/from that IP AND to/from its port for further investigation. Would I be able to AND each row, or conjoin the IP and port somehow so they are seen as one item/field (IP1 AND Port1 as Asset1)? Would I be able to OR each set, i.e. search for (IP.252 AND Port23) OR (IP.224 AND Port25), and so forth?
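A subsearch gives exactly this row-wise AND / cross-row OR behavior: each row the subsearch returns becomes a (src_ip=... AND src_port=...) clause, and the rows are ORed together. A sketch, assuming the VulnScans panel's search (shown as a placeholder) ends with fields named src_ip and src_port:

```spl
index=firewall
    [search <VulnScans base search here>
     | fields src_ip src_port]
```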
Hello, I am getting this new warning on my dashboards and searches about a Security Essentials lookup. I have run btool and searched the lookup data. I do not know what is happening or if there is an associated .csv. Any help would be appreciated.
Hi, last week I upgraded our Splunk servers to 8.1.2. Once completed, I upgraded the Cisco Firepower Encore app to 4.0.10. Shortly after, data stopped coming in. I did a good bit of troubleshooting, including new certs, a clean install of the Cisco Encore app, and setting it up as new. I am currently at the point where I'm getting cisco:estreamer.log (Monitor) every 2 minutes. However, I am NOT getting anything to sourcetype cisco:estreamer:data. When I check the data folder under /splunkhome/etc/apps/TA-estreamer/data, it is empty. I do a netstat to my Firepower and a ton of data is coming from it; I just can't find where it's landing. I do a search for encore.*.log and nothing shows up. Estreamer.conf has the correct information. Splencore.sh has been configured correctly, tests successfully, and is running. configure.sh looks correct. It has this for the path, --stream=relfile:///../../data/encore.{0}.log", which kind of bothers me. I'm stuck at the moment. We are working on restoring a backup of the folder structure from before I made the upgrade to have a comparison. Any help would be appreciated. If any additional information is needed from me, let me know.
Hi all, in the last couple of days I've started receiving an error whenever I try to save a search as a report. Does anyone have any suggestions about what might be causing this? Thanks, monotonic
I have a Splunk server which is successfully receiving data on a tcp-ssl port for a particular application (SecureCircle). I'm trying to set up a new port to receive data from Palo Alto firewalls, but it's running into the following error:

WARN SSLCommon - Received fatal SSL3 alert. ssl_state='SSLv3 read client key exchange A', alert_description='certificate unknown'

I'm using the same certificate and SSL configuration for both ports, so I know that the cert is fine. It's not a self-signed cert, and it's valid until 2022. I've been looking through some old posts with similar errors but none of them seemed to match my issue. Below is my port and SSL configuration from the btool inputs command:

/opt/splunk/etc/apps/Splunk_TA_paloalto/local/inputs.conf [SSL]
/opt/splunk/etc/system/default/inputs.conf _rcvbuf = 1572864
/opt/splunk/etc/system/default/inputs.conf allowSslRenegotiation = true
/opt/splunk/etc/system/default/inputs.conf cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
/opt/splunk/etc/system/default/inputs.conf ecdhCurves = prime256v1, secp384r1, secp521r1
/opt/splunk/etc/system/local/inputs.conf host = splunkhost.mydomain.com
/opt/splunk/etc/system/default/inputs.conf index = default
/opt/splunk/etc/apps/Splunk_TA_paloalto/local/inputs.conf requireClientCert = false
/opt/splunk/etc/apps/Splunk_TA_paloalto/local/inputs.conf serverCert = /opt/splunk/etc/auth/splunkhost.mydomain.com/splunkhost.mydomain.com.pem
/opt/splunk/etc/apps/Splunk_TA_paloalto/local/inputs.conf sslPassword = [Redacted]
/opt/splunk/etc/system/default/inputs.conf sslQuietShutdown = false
/opt/splunk/etc/apps/Splunk_TA_paloalto/local/inputs.conf sslVersions = tls1.2

/opt/splunk/etc/apps/Splunk_TA_paloalto/local/inputs.conf [tcp-ssl://6514]
/opt/splunk/etc/system/default/inputs.conf _rcvbuf = 1572864
/opt/splunk/etc/apps/Splunk_TA_paloalto/local/inputs.conf host = splunkhost.mydomain.com
/opt/splunk/etc/apps/Splunk_TA_paloalto/local/inputs.conf index = pan_logs
/opt/splunk/etc/apps/Splunk_TA_paloalto/local/inputs.conf sourcetype = pan:log

The configuration for the working port is:

/opt/splunk/etc/apps/ahs_ta_securecircle/local/inputs.conf [tcp-ssl://8443]
/opt/splunk/etc/system/default/inputs.conf _rcvbuf = 1572864
/opt/splunk/etc/system/local/inputs.conf host = splunkhost.mydomain.com
/opt/splunk/etc/apps/ahs_ta_securecircle/local/inputs.conf index = dlp
/opt/splunk/etc/apps/ahs_ta_securecircle/local/inputs.conf sourcetype = SecureCircle
Hello, this is a follow-up to my recent post, "Trouble with Hidden Panel Passing Value". I am having an issue with the safety ticker I implemented in that question. For some reason, the only result being displayed is <news ticker>. The code is below. This code was working but now is not. The point of the search is to produce one result showing the last safety incident, but only within the last 48 hours. The ticker itself uses a marquee to show the results.

<form>
  <search>
    <query>
      index=defmfg_safety work_center="S1**"
      | sort 0 -_time
      | dedup id
      | head 3
      | stats max(corrective_actions{}) as corrective_action by investigation_result
      | eval corrective_action=if(corrective_action="30 day follow up" OR corrective_action="6 month follow up","PENDING",corrective_action)
      | eval result=investigation_result +" -CORRECTIVE ACTION- "+ corrective_action
      | eval ticker=result
      | eval length=ceil(len(ticker)/2) . "ms"
    </query>
    <earliest>-48h@h</earliest>
    <finalized>
      <condition match="$job.resultCount$ == 0">
        <unset token="ticker"></unset>
        <unset token="ticker_result"></unset>
      </condition>
      <condition>
        <set token="ticker">$result.ticker$</set>
        <set token="ticker_result">$result.result$</set>
      </condition>
    </finalized>
  </search>
  <row depends="$ticker$">
    <panel>
      <html>
        <style>
          #marquee {
            font-size: 30px;
            color: white;
            height: 45px;
            white-space: nowrap;
            line-height: 60px;
          }
          h2 {
            font-size: 30px !important;
            text-align: center;
            padding: 5px !important;
            color: red;
          }
        </style>
        <h2>SAFETY ALERT</h2>
        <marquee scrollamount="19" id="marquee">$ticker$</marquee>
      </html>
    </panel>
  </row>
</form>
I need to extract two characters to a new field. The query I have written removes the first 6 characters, but it captures everything from the 7th character to the end.

Data:
{
    type: Identifier
    value: 1234567890ABCDEF
}

My regex:

{\"type\":\"Identifier\",\"value\":\"(?:.{6})(?<idCode>.*?)\"}

The query returns "7890ABCDEF", whereas I need only "78". Could someone please help me fix the issue?
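Since exactly two characters are wanted, the fix is to bound the capture group explicitly instead of running a lazy `.*?` up to the closing quote. A sketch (assuming the JSON is in _raw):

```spl
| rex field=_raw "\"value\":\"(?:.{6})(?<idCode>.{2})"
```

With the input above, idCode would capture the 7th and 8th characters, i.e. "78".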
Hi All, can you please help me with my problem? I would like to check all the hosts in a CSV file, but the results are for some reason truncated due to too many records. I have modified a search which was provided on other posts by some good soul. Here is my search:

| inputlookup my_lookup_definition
| join type=left [metadata type=hosts]
| dedup host lastTime firstTime
| eval age = now()-lastTime
| convert ctime(lastTime)
| eval field_in_ddhhmmss=tostring((age), "duration")
| rename field_in_ddhhmmss as "Time Offline" lastTime as "Last Time"
| sort + "lastTime"
| table host "Time Offline" "Last Time"

My main goal is to search all hosts from the CSV file and check which of them have been reporting to Splunk and which ones have stopped. The above search would do the trick, but the results are truncated. Is there any other way to achieve my goal without modifying the config files?

[subsearch]: Subsearch produced 100000 results, truncating to maxout 50000.
[subsearch]: Metadata results may be incomplete: 100000 entries have been received from all peers (see parameter maxcount under the [metadata] stanza in limits.conf), and this search will not return metadata information for any more entries.

I would be very grateful for your assistance here. Kind regards, Diirn
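One way to sidestep the 50,000-row subsearch cap (a sketch, assuming the lookup has a host column) is to invert the search: run metadata as the main generating command, so it is no longer a subsearch, and use the lookup only to tag which hosts you care about. Note this only removes the subsearch truncation; the separate [metadata] maxcount warning can still apply, and hosts that have never reported at all won't appear from metadata, so an `inputlookup ... append=t` pass would be needed to add those back.

```spl
| metadata type=hosts
| lookup my_lookup_definition host OUTPUT host as in_lookup
| where isnotnull(in_lookup)
| eval age=now()-lastTime
| eval "Time Offline"=tostring(age, "duration")
| convert ctime(lastTime) as "Last Time"
| table host "Time Offline" "Last Time"
```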
The intention of this correlation search is to find all new local admin accounts on end-user devices. The problem is, when using WinEventLog:Security EventCode 4732, a good number of the events have "-" as the user name and only provide the SID. We attempted to alleviate the issue by looking up the user name using WinEventLog:Security EventCodes 4720 and 4738, alongside an index that holds our Active Directory (MSAD) information. Most of the searches return a user name, but not all. When I search for the SID in the Active Directory index (MSAD) directly, the search completes successfully, but the same information is not pulled from the correlation search. Below is my current search:

index=wineventlog eventtype=wineventlog_security EventCode=4732 Group_Name=Administrators
| eval user_sid=mvindex(Security_ID,1)
| join type=left user_sid [search index=wineventlog eventtype=wineventlog_security EventCode=4720 OR EventCode=4738 | eval user_sid=mvindex(Security_ID,1)]
| join type=left user_sid [search index=msad | eval user_sid=objectSid | rename name as user]
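One thing worth checking: join subsearches are silently capped (row and runtime limits), which can explain why a SID resolves in a standalone search of the MSAD index but not inside the correlation search. A join-free sketch, merging on user_sid with stats instead (field names taken from the search above; the msad leg still runs as a subsearch, so its own limits apply and would need verifying):

```spl
index=wineventlog eventtype=wineventlog_security (EventCode=4732 Group_Name=Administrators) OR EventCode=4720 OR EventCode=4738
| eval user_sid=mvindex(Security_ID,1)
| append [search index=msad | eval user_sid=objectSid | rename name as user | fields user_sid user]
| stats values(user) as user, values(EventCode) as codes by user_sid
| search codes=4732
```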
Hi, I am trying to remove the comma in my number but it doesn't work. Could you help me please?

| rex field=count mode=sed "s/,/./g" | stats count(HOSTNAME)
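Two things stand out in that search. The sed expression `s/,/./g` replaces commas with periods rather than deleting them; an empty replacement removes them. Also, `stats count(HOSTNAME)` counts events that have a HOSTNAME field, not the numeric field just cleaned, which may not be what was intended. The comma removal alone would be:

```spl
| rex field=count mode=sed "s/,//g"
```

An eval alternative with the same effect is `| eval count=replace(count, ",", "")`.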
We have a search that monitors and reports website usage by users over time. Our customer base is 4K+; most are mobile users, so IPs change frequently. Monthly activity generates approximately 10K+ transactions, with the pertinent data points being:

email: cxxx@yyy.com
IP address: 192.168.1.3
Session Start: 2021/01/28 10:30:43
Session Stop: 2021/01/28 10:34:32
Duration (mins): 3.82
Actions: 18

We would like to automate the process of identifying email and IP transactions that are concurrent, i.e. the same user on different IPs at the same time. Any suggestions?
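One sketch, using the field names from the example above (`your_base_search` is a placeholder): sort each user's sessions by start time, carry the previous session's stop time and IP forward with streamstats, and flag any session that starts before the previous one ended from a different IP. This only catches overlap with the immediately preceding session per user, which may be enough at this volume.

```spl
your_base_search
| rename "IP address" as ip, "Session Start" as s_start, "Session Stop" as s_stop
| eval start=strptime(s_start, "%Y/%m/%d %H:%M:%S"), stop=strptime(s_stop, "%Y/%m/%d %H:%M:%S")
| sort 0 email start
| streamstats current=f window=1 last(stop) as prev_stop, last(ip) as prev_ip by email
| where start < prev_stop AND ip != prev_ip
```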
Hello Splunk Community! I need to do some Excel analysis on the episodes in ITSI, breaking them up by various parameters. I might be able to create a Splunk dashboard to do this sort of thing, but I think management is going to want it in Excel ultimately. Making a report off of raw searches is a cakewalk, but is there any way to make a report of the EPISODES in ITSI? Basically what I am doing now is literally highlighting and copying the episodes off our ITSI Episode Review and pasting them into Excel, getting me all the data from the columns like assignee, time/date, etc. I know there must be a way to do this more programmatically. Or am I barking up the wrong tree here? Do I need to generate reports for each correlation search instead? Thanks in advance!
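One hedged starting point: ITSI keeps grouped/episode events in its own index, so a raw search over it can be saved as a report and exported to CSV like any other. The sketch below assumes the default index name and episode field names; both should be verified against an actual episode event on your install before relying on them.

```spl
index=itsi_grouped_alerts
| stats latest(itsi_group_title) as title, latest(itsi_group_severity) as severity, latest(itsi_group_owner) as owner by itsi_group_id
```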
Agents for old versions of Windows: I have a client which has some devices running Windows 2012 and 2008. On the Splunk agent download page there is a link where previous versions can be found. When I follow this link I see that there are some agents for these versions of Windows. My question is: why does the first column show a version of Splunk? I don't know whether that agent only works with that version of Splunk Enterprise.