All Topics

Is it possible to run a scripted input on a search peer? Also, is it possible to ensure it runs on all search peers? Thanks ahead of time.
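A hedged sketch of how this is often done: scripted inputs run on any full Splunk instance, including indexers, so an inputs.conf stanza can be packaged into an app and pushed to every peer (via the cluster manager's manager-apps/master-apps for clustered indexers, or a deployment server otherwise). The app name, script path, interval, and sourcetype below are all hypothetical:

# $SPLUNK_HOME/etc/apps/my_peer_inputs/local/inputs.conf
[script://$SPLUNK_HOME/etc/apps/my_peer_inputs/bin/my_script.sh]
interval = 300
index = main
sourcetype = my_script_output
disabled = 0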
I've got a query I want to run on a daily basis and write the results to a lookup (the number of results, once per day); then I want to be able to query that lookup to pull the last 7 days' counts. Is this possible? Is there a better way?

I have a lookup file of IDS exclusions that I am constantly updating, and I want to be able to see how many results the search returned each day. If I run the search at the end of the 7 days it won't be accurate, because it would run against the lookup after 7 days of updates: if I had 20 results on Monday and put something in the lookup that excluded those 20, I would lose visibility when I ran it the next day, since the lookup would exclude those 20 results. I was thinking that if I could store the count somewhere each day and query it later, I wouldn't need to run anything against the exclusions lookup; I could just pull the historical counts I wrote.

Sorry if I am overcomplicating this. I'm new to Splunk, so if there is a better way to do it, please let me know!
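One hedged way to do this, assuming a scheduled daily search (the lookup name daily_ids_counts.csv and field names are made up): have the search append one row per day with outputlookup, then read the history back with inputlookup.

... your IDS exclusion search ...
| stats count AS daily_count
| eval day=strftime(now(), "%Y-%m-%d")
| outputlookup append=true daily_ids_counts.csv

Then, to pull the last 7 days of stored counts:

| inputlookup daily_ids_counts.csv
| eval _time=strptime(day, "%Y-%m-%d")
| where _time >= relative_time(now(), "-7d@d")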
We have been using the Microsoft Azure App for Splunk for some time; however, it has recently started throwing a 404 Page Not Found error. I tried rolling back to a previous version but still get the same thing. This is also happening with the Microsoft 365 App for Splunk. Both were working, and I can't figure out what has changed to cause them to throw the 404. CentOS 7, Splunk Enterprise 8.2.6. Thanks! Aaron
We have just started using the IT Essentials app. We are generating alarms based on thresholds being breached, but the thresholds only seem to fire when, for example, CPU peaks at 90%. What I am looking for is an alarm for when CPU stays at 100% for a period of 10 minutes. Below is my SPL; would using time_window = 15m suffice?

| mstats max(ps_metric.pctCPU) as val WHERE index = em_metrics OR index = itsi_im_metrics by host span=5m
| eval val=100-val
| rename host as host
| eval host="host=".$host$ , id="ta_nix"
| lookup itsi_entities entity_type_ids as id _itsi_identifier_lookups as host OUTPUT _key as entity_key, title, _itsi_informational_lookups as info_lookup, _itsi_identifier_lookups as alias_lookup
| search entity_key != NULL
| eval entity_type="Unix/Linux Add-on"
| eval metric_name="CPU Usage Percent"
| eval itsiSeverity=case(val <= 75, 2, val <= 90 and val > 75, 4, val > 90, 6)
| eval itsiAlert=metric_name." alert for ".entity_type." entity type"
| eval itsiDrilldownURI="/app/itsi/entity_detail?entity_key=".entity_key
| eval itsiInstance=title
| eval entity_title=title
| eval itsiNotableTitle=title
| eval val = round(val, 2)
| eval itsiDetails = metric_name + " current value is " + val
| eval sec_grp=default_itsi_security_group
| eval alert_source="entity_type"
| where IsNull(is_entity_in_maintenance) OR (is_entity_in_maintenance != 1)
| fields - host
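A hedged sketch of one way to require a sustained peak, assuming (as in the search above) that ps_metric.pctCPU must be inverted with 100-val, and that the metric is collected at 1-minute or finer granularity: bucket at 1 minute, then use streamstats to check the minimum over a 10-minute sliding window per host.

| mstats max(ps_metric.pctCPU) AS val WHERE index=em_metrics OR index=itsi_im_metrics BY host span=1m
| eval used=100-val
| streamstats window=10 global=f min(used) AS sustained_min BY host
| where sustained_min >= 100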
Hi all, my question is: if we put INDEXED_EXTRACTIONS = json in props.conf on the HF where we are ingesting events via a HEC input, and put KV_MODE = none in props.conf on the SH, will any custom fields still be extracted at search time on the SH or not? Thanks in advance.
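For reference, a minimal sketch of the two props.conf files under that reading (the sourcetype name my_hec_sourcetype is hypothetical, and this assumes the HEC data actually passes through the HF's parsing pipeline). If the JSON fields are extracted at index time on the HF, they are stored as indexed fields, so KV_MODE = none on the SH should not remove them; it only disables search-time auto-extraction.

# props.conf on the HF (parsing tier)
[my_hec_sourcetype]
INDEXED_EXTRACTIONS = json

# props.conf on the SH
[my_hec_sourcetype]
KV_MODE = none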
On the page "Configure data collection using a REST API call" there is a section about adding setup parameters. However, on the shell input page "Configure data collection using a shell command" ther... See more...
On the page "Configure data collection using a REST API call" there is a section about adding setup parameters. However, on the shell input page "Configure data collection using a shell command" there is no such section. There is a section about adding input parameters, but it's not the same. The reason I'm asking is because I'm trying to add setup parameterts to a shell input, and I just get error messages in the final validation page, no matter what I do. Should it be the same syntax as for REST-inputs, or is it different for shell inputs? See attached screenshot of what I'm trying to do. I've already tried the following versions of the parameter syntax, but I get the same error messages for all of them,  and yes, I've added values to the parameters (in the Add-on Setup Parameters tab).   ${__settings__.additional_parameters.my_parameter} ${additional_parameters.my_parameter} ${my_parameter}   Also, I get it to work if I switch to input parameters instead, but in this case I want to use setup parameters, as I'm planning to re-use the parameters in several inputs.
Hi, we have some data containing a hierarchy of folders that we want to extract from the source path. The raw data looks like this:

source=/usr/local/intranet/areas/ua1/output/MUN

We would like to create two regexes, one to extract "intranet" and one to extract "output". Can someone please help? Thanks
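A hedged sketch, assuming the two folders always sit at the 3rd and 6th path segments; the field names app_area and stage are made up:

| rex field=source "^/[^/]+/[^/]+/(?<app_area>[^/]+)/"
| rex field=source "^(?:/[^/]+){5}/(?<stage>[^/]+)(?:/|$)"

For source=/usr/local/intranet/areas/ua1/output/MUN this yields app_area=intranet and stage=output.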
Hi all, we are trying to show bytes/s averaged over 15 minutes. I'm getting far lower results if I use per_second than with a live timechart with a span of 1s. So:

index="datafeed" | where isnotnull(bytes) | timechart span=15m per_second(bytes)

gives an average of 10 MB/s, whereas:

index="datafeed" | where isnotnull(bytes) | timechart span=1s sum(bytes)

shows the data constantly hovering around the 100 MB/s mark, so the 15-minute average should be up at that level. Am I missing something obvious? Thanks for any pointers!
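For what it's worth, per_second divides each span's total by the full span length in seconds, so seconds with no events still count toward the denominator. A quick sanity check (a sketch using the fields above; 900 = 15 minutes in seconds):

index="datafeed" | where isnotnull(bytes)
| timechart span=15m sum(bytes) AS total
| eval manual_Bps=total/900

If manual_Bps matches the per_second result, the 1s chart is likely showing short bursts rather than a sustained rate.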
Hi, we are trying to install Splunk on the D: drive because we are running short of storage on the C: drive, but we are unable to change the installation path from C: to D:. Please help us overcome this issue. Thanks, Developer.
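On Windows this is usually done either via the installer's "Customize Options" screen or from the command line with the MSI's INSTALLDIR property. A sketch; the MSI filename is a placeholder:

msiexec.exe /i splunk-<version>-x64-release.msi INSTALLDIR="D:\Splunk" AGREETOLICENSE=Yes /quiet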
Hi all, I am trying to create a summary index that runs once a week, and I want only a few fields to be populated in the summary index.

Questions:
1) I want only three fields in the summary index: Test1, Test2, Test3. Can I use the table command on these 3 fields at the end of my query and create a report to populate the summary index? When I use the fields command, those fields don't show up in my index. Why is that? I want these fields in the SI so that I can run different stats commands against them and use them in my dashboard.
2) Also, I used a time range of last 7 days (to summary-index the last 7 days of data), but only the first 3 days of data is being written to the SI, and I don't see any errors.

I googled this question but couldn't find an exact answer. Can anyone please help me understand? Thanks in advance. Newbie to Splunk.
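A hedged sketch of the usual pattern, assuming a weekly scheduled search (my_summary_index is a made-up name): keep only the wanted fields, then write them with collect, or enable the report's summary indexing action instead of the collect line.

... your weekly search ...
| fields Test1 Test2 Test3
| collect index=my_summary_index

Afterwards the stored rows can be queried like any index, e.g. index=my_summary_index | stats count by Test1.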
Hi guys, this is the first time I'm trying to install the Splunk universal forwarder (8.2.2.1) on an AIX (7.1) machine. I have no previous experience with AIX, but have installed many forwarders on Linux/Unix machines. The issue is that when I'm done installing and Splunk prompts for a user/password, the entire process hangs after I've entered the username. The only way out is to kill the PID or exit the machine.

Steps taken:
1. Downloaded the tgz file and expanded it to the /opt folder with: gunzip -c "filename.tar.gz" | tar -xvf -
2. Changed ownership with: chown -R splunk:splunk /splunk/splunkforwarder
3. Switched to the splunk user to install: su - splunk
4. Ran, from $SPLUNK_HOME/bin: ./splunk start --accept-license
5. Selected Y
6. Entered the username, and this is where it stops.

Any suggestions would be kindly appreciated.
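One hedged workaround for the hanging prompt is to pre-seed the admin credentials so Splunk never asks interactively; create this file before the first start (the values are examples):

# $SPLUNK_HOME/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = ChangeMe123!

Then start without prompts:

./splunk start --accept-license --answer-yes --no-prompt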
Hi team, I have a big ol' search that tables a bunch of resource usage data. Now I smack an outputcsv on that bad boy and schedule it to run once a month. Before it runs next month, I would like to run the search again, drag in the old results with inputcsv, compare the two, and maybe list only the changes (and maybe how much changed?).

(index="redacted" OR index="redacted2") EventCode=1011
| rex field=Message "\W(?<ServerName>\S+)\s\w+\W(?<PowerState>\S+)\s\w+\W(?<CpuCount>\S+)\s\w+\W(?<CoresPerSocket>\S+)\s\w+\W(?<GuestHostName>\S+)(:)(?<GuestOS>.+)(MemoryMB)\W(?<MemoryMB>\S+)\s\w+\W(?<ResourcePool>.+)(Version)\W(?<Version>\w+)\s\w+\W(?<UsedSpaceGB>\S+)\s\w+\W(?<ProvisionedSpaceGB>\S+)\s\w+\W(?<VMHost>\S+)\s\w+\W(?<Folder>.+)"
| eval UsedSpaceGB = round(UsedSpaceGB,2)
| eval ProvisionedSpaceGB = round(ProvisionedSpaceGB,2)
| search VMHost="***"
| table ServerName PowerState CpuCount CoresPerSocket GuestHostName GuestOS MemoryMB ResourcePool Version UsedSpaceGB ProvisionedSpaceGB VMHost Folder
| dedup ServerName
| search ServerName="*"
| search VMHost="*" PowerState="*" ResourcePool="redacted "
| outputcsv redacted_filename.csv

The new search, | inputcsv redacted_filename.csv, lists the old results just fine, except it sorts the column names alphabetically, but whatever. Is there an easy way to compare the two, or will I have to extract all fields and compare manually?
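A hedged sketch of one comparison approach: tag each dataset, append the old CSV, and keep only rows that appear in exactly one of the two runs (the by-list is shortened here; extend it to the full column set):

(current monthly search, up to dedup)
| eval run="current"
| append [| inputcsv redacted_filename.csv | eval run="previous"]
| stats dc(run) AS run_count values(run) AS run BY ServerName CpuCount MemoryMB UsedSpaceGB
| where run_count=1

Unchanged rows appear in both runs and get filtered out; new, removed, or changed servers surface once per differing version, with run telling you which side each row came from.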
I have a table visualization that contains details such as name, course, and other fields. When I click on any value in the name column, that value should be passed through the URL and open another dashboard showing the full details for that name. We have tried $row.name.value$, $value$, and other syntaxes as well, but no luck. Can anyone help me here? I set the token in the source code and passed the values in the URL too, but it's not taking the value:

app/search/dashboarddetails?form.name=$value$
app/search/dashboarddetails?form.name=$row.name.value$
etc.

Also, is there any constraint that only 2 or 3 tokens can be passed in the URL?
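For reference, a minimal Simple XML drilldown sketch placed inside the <table> element, assuming the target dashboard is named dashboarddetails and has a form.name token; the |u filter URL-encodes the clicked value:

<drilldown>
  <condition field="name">
    <link target="_blank">/app/search/dashboarddetails?form.name=$row.name|u$</link>
  </condition>
</drilldown>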
Hello! Splunk newbie here. I was hoping to get some advice on how to condense this search query. Is there another command I can use so I don't need so many eval statements? What I'm trying to do is remove any results containing the words Okta, FIDO, Google, and Voice, so I'm left with the users that have the MFA factors Password and SMS authentication. Thanks in advance!

source="test.csv" sourcetype="csv"
| stats values("MFA Factor") as MFA, values("Last Enrolled_ISO8601") as "Last Enrolled", values("Last Used_ISO8601") as "Last Used" by User, Login
| eval MFA= if(like(MFA,"Okta%"),null, MFA)
| eval MFA= if(like(MFA,"%FIDO%"),null, MFA)
| eval MFA= if(like(MFA,"Google%"),null, MFA)
| eval MFA= if(like(MFA,"Voice%"),null, MFA)
| where isnotnull(MFA)
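A hedged condensation: since MFA comes from values() it can be multivalue, so a single mvfilter with a combined regex (mirroring the original prefix/contains patterns) can replace the four evals:

source="test.csv" sourcetype="csv"
| stats values("MFA Factor") as MFA, values("Last Enrolled_ISO8601") as "Last Enrolled", values("Last Used_ISO8601") as "Last Used" by User, Login
| eval MFA=mvfilter(NOT match(MFA, "^(Okta|Google|Voice)|FIDO"))
| where isnotnull(MFA)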
Hello, we need to configure TA-mailclient, but with a separate account (username and password) for the mailbox. Do you know by any chance which parameter we should add to our inputs.conf? Unfortunately, neither changing the format of the mail stanza to [mail://<username>\<mailaddress>] nor adding a new parameter to the input stanza helps. We tried:

[mail://thisisourmail.com]
attach_message_primary = False
host = host1
include_headers = True
index = mail
mailbox_cleanup = readonly
mailserver = mailserver.zz
maintain_rfc = False
username = mail_username
password = encrypted
protocol = IMAP
disabled = 0

and

[mail://mail_username\thisisourmail.com]
attach_message_primary = False
host = host1
include_headers = True
index = mail
mailbox_cleanup = readonly
mailserver = mailserver.zz
maintain_rfc = False
password = encrypted
protocol = IMAP
disabled = 0

Best regards, Justyna
Hi, I want to integrate Cisco ESA logs with Splunk. We have a syslog collector where a UF is installed. Can anyone please point me to documentation for the integration?
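As a rough sketch of the usual pattern once the Splunk Add-on for Cisco ESA is installed: point a monitor stanza on the UF at the syslog collector's drop directory. The path and index here are examples, and the sourcetype assumes the add-on's conventions; check the add-on's documentation for the exact names.

[monitor:///var/log/syslog/esa/*.log]
sourcetype = cisco:esa:textmail
index = email
disabled = 0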
Hi, I have a requirement to configure a UF to send data to two different deployment servers, or in other terms, to two different Splunk Enterprise environments. We are doing this because the application team's data needs to be sent to two different projects' Splunk Enterprise instances: one needs the audit logs and the other needs the infrastructure data. To comply with company security policy, each Splunk Enterprise must retain control over its own logs as well as its own deployment server. So please let me know whether there is any approach that lets me configure two deploymentclient.conf files on one UF and send data to two different deployment servers. Thank you!
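Note that deploymentclient.conf supports only a single deployment server per instance, but forwarding itself can fan out: outputs.conf happily sends to multiple destinations. A hedged sketch of the forwarding half (hostnames are examples); per-input routing can then be done with _TCP_ROUTING in inputs.conf:

[tcpout]
defaultGroup = audit_group, infra_group

[tcpout:audit_group]
server = splunk-audit-idx1:9997

[tcpout:infra_group]
server = splunk-infra-idx1:9997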
I want to hide the column names from the 2nd table onwards in the panel below:

<row>
  <panel>
    <title>STATS : SLI/SLO Dashboard count</title>
    <table>
      <search base="pubsubLatencyHighAckDelayDFBaseSearch">
        <query> | stats values(serviceName) as serviceName count(eval(error=="failure")) as failureCount count(eval(error=="warning")) as warningCount</query>
      </search>
    </table>
    <html depends="$dontshow$">
      <style>
        #tableWithHiddenHeader1 thead{
          visibility:hidden;
          display:none;
        }
      </style>
    </html>
    <table id="tableWithHiddenHeader1">
      <search base="dfLatencyOverallProcessingDelayBaseSearch">
        <query> | stats values(serviceName) as serviceName count(eval(error=="failure")) as failureCount count(eval(error=="warning")) as warningCount</query>
      </search>
    </table>
  </panel>
</row>

However, when I use visibility:hidden; display:none; the alignment is bad. Attached is a screenshot.
Hello, I have verified that sourcetype=aws:config is being ingested from AWS according to https://docs.splunk.com/Documentation/AWS/6.0.3/User/Topology. Still, nothing shows up under the Topology tab. The troubleshooting documentation says to "check that the Config: Topology Data Generator saved search is enabled". I've looked for this saved search and it doesn't exist. I found other references on the Community saying that the Topology Data Generator search isn't found. Is that an error on Splunk's part in the documentation, or am I missing something? Any help is greatly appreciated! V/r, mello920
I am a newbie to Splunk, need to configure Palo Alto with Splunk, and am looking for corrective action. What I've done so far:

1. Allowed port 1514 for rsyslog: sudo semanage port -a -t syslogd_port_t -p tcp 1514
2. Ran the firewall command to allow port 1514: sudo firewall-cmd --zone=public --permanent --add-port=1514/tcp
3. On the deployment server, created the app name in serverclass.conf, enabled it, and reloaded the deployment server, just like the other appliance apps such as Barracuda and Cisco.
4. Installed Splunk_TA_paloalto on the heavy forwarder. The UI configuration page is empty, as I don't see any information that can be added. After reading a few blogs I came up with the inputs.conf stanzas below, Ver 1 and Ver 2.

Outcome:
- Logs are being ingested into the cisco index, because the cisco app is monitoring the file path /*.log where I have provided the suitable stanza version 1 (not sure if it is working fine; please note the log folder names are in capitals, e.g. /var/log/remote/ABC-FW01-DOMAIN.COM/1,2022LOG).
- Logs are going to the cloud from the remote folder, but not through the Palo Alto app, so the cloud-based PA app won't be able to read them.

Please guide me on the correction.
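For comparison, a hedged monitor stanza of the shape the Palo Alto add-on's documentation describes; the path and index are examples, and the sourcetype name should be checked against your add-on version (recent versions document pan:log, with the Palo Alto App searching index pan_logs by default):

[monitor:///var/log/remote/*FW*/.../*.log]
sourcetype = pan:log
index = pan_logs
disabled = 0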