All Topics


I am using the query below to get the daily average number of users during business hours:

index=pan_logs sourcetype=json_no_timestamp metricname="field total user"
| bin _time span=3h
| stats latest(metricvalue) AS temp_count by metricname _time
| stats sum(temp_count) as "Users" by _time
| eval Date=strftime(_time,"%m/%d/%y")
| eval bustime=_time, bustime=strftime(bustime, "%H")
| eval day_of_week = strftime(_time,"%A")
| where (bustime > 8 AND bustime < 18) AND NOT (day_of_week="Saturday" OR day_of_week="Sunday")
| eventstats avg(Users) as DailyAvgUsers by Date
| eval DailyAvgUsers = round(DailyAvgUsers)
| table Date day_of_week DailyAvgUsers

But the query gives 3 counts per day while I want only 1 per day. When I change the span to 6h it gives one count, but since I am only counting between 8 AM and 6 PM, it gives no count when I run the search at 12 PM on a Monday with a 6h span. How can I get one average count per day while keeping span=3h?

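A minimal sketch of one fix, assuming the duplicate rows come from eventstats (which keeps one row per 3-hour bucket rather than collapsing them): replace the final eventstats with a stats aggregation by Date, so each day reduces to a single row while the 3h span still drives the per-bucket counts.

index=pan_logs sourcetype=json_no_timestamp metricname="field total user"
| bin _time span=3h
| stats latest(metricvalue) AS temp_count by metricname _time
| stats sum(temp_count) as Users by _time
| eval Date=strftime(_time,"%m/%d/%y")
| eval bustime=tonumber(strftime(_time,"%H"))
| eval day_of_week=strftime(_time,"%A")
| where (bustime > 8 AND bustime < 18) AND NOT (day_of_week="Saturday" OR day_of_week="Sunday")
| stats avg(Users) as DailyAvgUsers by Date day_of_week
| eval DailyAvgUsers=round(DailyAvgUsers)

tonumber() is added because strftime returns a string; the hour boundaries are kept exactly as in the original query.
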
Hi Experts, I want the Column 1 timestamp to be selected by default in "Date/Time Range From". Not sure what I am doing wrong, but the panel only updates when I select a different date:

<fieldset submitButton="false" autoRun="false">
  <input type="time" token="field2" searchWhenChanged="true">
    <label>Column 1</label>
    <default>
      <earliest>1661144400</earliest>
      <latest>1661230800</latest>
    </default>
    <change>
      <eval token="timeRangeEarliestearliest">if(isnum($field2.earliest$), $field2.earliest$, relative_time(now(), $field2.earliest$))</eval>
      <eval token="timeRangeLatestearliest">if(isnum($field2.latest$), $field2.latest$, relative_time(now(), $field2.latest$))</eval>
      <eval token="prettyPrinttimeRangeFromTimeearliest">strftime($timeRangeEarliestearliest$, "%a, %e %b %Y")</eval>
      <eval token="prettyPrinttimeRangeToTimeearliest">strftime($timeRangeLatestearliest$, "%a, %e %b %Y")</eval>
    </change>
  </input>
  <input type="time" token="field1" searchWhenChanged="true">
    <label>Column 2</label>
    <default>
      <earliest>@d</earliest>
      <latest>now</latest>
    </default>
    <change>
      <eval token="timeRangeEarliestlatest">if(isnum($field1.earliest$), $field1.earliest$, relative_time(now(), $field1.earliest$))</eval>
      <eval token="timeRangeLatestlatest">if(isnum($field1.latest$), $field1.latest$, relative_time(now(), $field1.latest$))</eval>
      <eval token="prettyPrinttimeRangeFromTimelatest">strftime($timeRangeEarliestlatest$, "%a, %e %b %Y")</eval>
      <eval token="prettyPrinttimeRangeToTimelatest">strftime($timeRangeLatestlatest$, "%a, %e %b %Y")</eval>
    </change>
  </input>
</fieldset>
<row>
  <panel>
    <html>
      <h3>Date/Time Range From</h3>
      <table>
        <tr><td>From:</td><td>$prettyPrinttimeRangeFromTimeearliest$</td></tr>
        <tr><td>To:</td><td>$prettyPrinttimeRangeToTimeearliest$</td></tr>
      </table>
    </html>
  </panel>
</row>
<row>
  <panel>
    <html>
      <h3>Date/Time Range</h3>
      <table>
        <tr><td>From:</td><td>$prettyPrinttimeRangeFromTimelatest$</td></tr>
        <tr><td>To:</td><td>$prettyPrinttimeRangeToTimelatest$</td></tr>
      </table>
    </html>
  </panel>
</row>

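A likely explanation, offered as a sketch rather than a confirmed diagnosis: a <change> handler only fires on user interaction, so the pretty-print tokens are never set for the initial <default> values. Simple XML supports an <init> block at the top of the dashboard that can seed those tokens from the same epoch values used in the defaults:

<init>
  <eval token="prettyPrinttimeRangeFromTimeearliest">strftime(1661144400, "%a, %e %b %Y")</eval>
  <eval token="prettyPrinttimeRangeToTimeearliest">strftime(1661230800, "%a, %e %b %Y")</eval>
</init>
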
Hello guys. I inherited a Splunk environment and I'm kind of new to this, so I'm studying quite a lot. In my scenario I have roughly 100 Windows UFs sending data to 1 heavy forwarder, which sends to 3 indexers to complete the process. Now I want to filter this data, and I'm wondering if I can build a blacklist on the HF to filter these logs. If I can, what is the best way?
- Create an inputs.conf in the local folder and change it from there? I tried this and I think it worked; the problem is the logs went to the main index and I couldn't figure out how to change that.
- Create some filter on the indexers?
Thanks for the help so far.

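For reference, the usual pattern for dropping events on a heavy forwarder is a props.conf/transforms.conf pair that routes matching events to nullQueue. The stanza name and regex below are placeholders to adapt:

# props.conf on the heavy forwarder
[WinEventLog:Security]
TRANSFORMS-drop_noise = drop_noise

# transforms.conf on the heavy forwarder
[drop_noise]
REGEX = EventCode=4662
DEST_KEY = queue
FORMAT = nullQueue

On the "went to main" problem: events usually land in main when the inputs.conf stanza has no index setting, so adding index = <your_index> to the stanza (assuming that index exists on the indexers) should redirect them.
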
How can I get the alert email to show only the failed job, for example "Job=[ADM-FILENET-DLY]]", instead of the complete log? Note: the job names are dynamic.

My current alert query:

index=* host=*MYhost* "*IN-RCMCO-DLY*" OR "*ADJ-RECERT-DLY*" OR *AD*-*Y*" FAILED job_status2=FAILED OR status=FAILED OR status1=FAILED OR ExitCode=FAILED
| rex field=_raw ".*status:\s\[(?P<status1>\S+)\]"
| rex field=_raw "JOB\s(?P<job_status2>\w+)"
| rex field=_raw "(exitCode=)(?<ExitCode>\w+)"
| eval _raw=substr(_raw, 1, 1500)
| table _time job_status2 status1 status ExitCode _raw

Log:

22-08-28 18:01:31,323 INFO [main] c.l.b.listener.JobCompletionListener: :::::::::::::::BATCH JOB FAILED:::::::::::JobExecution: id=21099, version=1, startTime=Sun Aug 28 18:01:29 CDT 2022, endTime=Sun Aug 28 18:01:31 CDT 2022, lastUpdated=Sun Aug 28 18:01:29 CDT 2022, status=FAILED, exitStatus=exitCode=FAILED;exitDescription=com.ltss.fw.exception.ApplicationException: Error occured while processing appDocument: In catch block, exception stackTrace,job=[JobInstance: id=21099, version=0, Job=[ADM-FILENET-DLY]], jobParameters=[{chunkSize=null, skipLimit=null, commitInterval=null, time=1661727689449, asOfDate=1661662800000}]

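A minimal sketch of one approach, assuming the job name always appears as Job=[...] in the raw event (job_name is an illustrative field name): extract just that value with rex and table it, then reference it in the alert email body as $result.job_name$ instead of including the raw event.

| rex field=_raw "Job=\[(?<job_name>[^\]]+)\]"
| table _time job_name
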
How can I change the policy name match from equals to "contains"? Instead of saying:

index=tenable* sourcetype="*" policyName="*"
| eval policyName=if(policyName="93e1da98-656c-5cd5-933b-ce6665fc0486-1948841/CIS PostgreSQL 11 (20210915)","PostgreSQL",policyName)

I would like to say if(policyName=*CIS PostgreSQL*, but it doesn't work.

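For context, eval's = does a literal comparison and does not expand wildcards; like() (with % wildcards) or match() (with a regex) is the usual way to express "contains". A sketch against the same field:

| eval policyName=if(like(policyName, "%CIS PostgreSQL%"), "PostgreSQL", policyName)

or equivalently:

| eval policyName=if(match(policyName, "CIS PostgreSQL"), "PostgreSQL", policyName)
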
Just came across an interesting use case, and I'm wondering how people solve it. Phantom talks to an internal asset via HTTP and an API key. This asset has redundancy: if it goes down, a backup comes online. Part of that is name redirection. The data underneath is all the same, but the API key changes. My thought would be to perform a test connectivity check at the top of the playbook and then pass the asset number down the playbook. Is there a smarter way to handle this? Thanks!

We have configured our server to send syslog events to our Splunk collectors over syslog UDP port 514, but we are not seeing the hostname listed in the ingested data. How do we get Splunk to display the hostname? Thank you, Angel

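One thing worth checking, sketched here as an assumption about the input configuration: on a UDP input, the host field comes from the connection_host setting, and the built-in syslog sourcetype additionally extracts the host from the syslog header when one is present.

# inputs.conf on the collector
[udp://514]
sourcetype = syslog
# dns = reverse-lookup the sender; use "ip" to keep the sender IP instead
connection_host = dns
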
We have Monitoring of Java Virtual Machines with JMX set up on our Splunk forwarder (Linux), and it runs fine when started with "./splunk start" from the forwarder's bin directory, with the logs below.

08-29-2022 09:33:57.733 -0600 INFO SpecFiles - Found external scheme definition for stanza="jmx://" from spec file="/opt/splunkforwarder/etc/apps/SPLUNK4JMX/README/inputs.conf.spec" with parameters="activation_key, config_file, config_file_dir, polling_frequency, additional_jvm_propertys, output_type, hec_port, hec_host, hec_endpoint, hec_poolsize, hec_token, hec_https, hec_batch_mode, hec_max_batch_size_bytes, hec_max_batch_size_events, hec_max_inactive_time_before_batch_flush, log_level"

However, when I try to start Splunk as a service with "sudo service splunk start", everything else starts fine, but I get the following errors in splunkd.log:

08-29-2022 09:46:16.519 -0600 ERROR ModularInputs - Introspecting scheme=jmx: Unable to run "python3.7 /opt/splunkforwarder/etc/apps/SPLUNK4JMX/bin/jmx.py --scheme": child failed to start: No such file or directory
08-29-2022 09:46:16.542 -0600 ERROR ModularInputs - Unable to initialize modular input "jmx" defined in the app "SPLUNK4JMX": Introspecting scheme=jmx: Unable to run "python3.7 /opt/splunkforwarder/etc/apps/SPLUNK4JMX/bin/jmx.py --scheme": child failed to start: No such file or directory.

Can anyone point me in the right direction? I set up Splunk as a service with "sudo ./splunk enable boot-start -user splunkuser". I suspect there is a permission mismatch between splunkuser (the Splunk owner) and root, but I'm not sure where to correct that.

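"child failed to start: No such file or directory" when spawning python3.7 usually means the interpreter is not on the PATH the service environment provides, rather than a permissions problem. A quick check and one common workaround, sketched assuming python3.7 lives somewhere like /usr/local/bin (adjust paths to your install):

# see where the interactive environment finds the interpreter
sudo -u splunkuser which python3.7
# expose it on a path the service environment searches
sudo ln -s /usr/local/bin/python3.7 /usr/bin/python3.7
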
Hi, is there a way to authenticate to the API through SAML? Right now, our security policy prohibits the use of local unmanaged accounts. I have SAML authentication with Azure AD configured for web access, but when I try to use those same AD credentials to authenticate to the API it does not work. Please help with steps for configuring Azure AD to work with the REST API in Splunk.

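For background, SAML is a browser-redirect flow, so SAML credentials cannot be presented directly to splunkd. The usual route (assuming Splunk 7.3 or later) is to issue the SAML user an authentication token under Settings > Tokens and pass it as a Bearer header:

curl -k -H "Authorization: Bearer <token>" https://<splunk-host>:8089/services/search/jobs
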
Hello, I have to decommission a site due to a datacenter shutdown. We currently have four sites with 10 indexers each. The site decommission is well documented; what is not clear is how data originating from the decommissioned site is replicated to the remaining site. Using:

site_mappings = site4:site2

originating data from site4 is replicated to site2. Suppose there are 20 TB of data: how much data does each indexer on site2 receive? Is there some sort of balancing (2 TB each), or is it not predictable? It is also not clear whether the replicated buckets for the dismissed site are removed by Splunk when the cluster master is restarted, or whether that can be done manually. I need this information to estimate whether the current file system size is enough. Thanks

Hi, how can we extract a list of open episodes in Splunk ITSI? Thanks!

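A sketch of one way, assuming a default ITSI setup where episode (notable event group) records are written to the itsi_grouped_alerts index and closed episodes carry status 5; field names and status codes may differ by version:

index=itsi_grouped_alerts
| stats latest(status) as status by itsi_group_id
| where status!=5
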
Hello, I have a question about pipeline parallelization. From the docs and other sources I gather that it is safe to enable pipeline parallelization if I have plenty of free resources in my Splunk deployment, particularly CPU cores; in other words, if the CPUs on the indexers or heavy forwarders are "underutilized". But my question is: what does "underutilized" mean in numbers, especially in a distributed environment? Example: imagine I have an indexer cluster of 8 nodes with 16 CPU cores each. In the Monitoring Console (historical charts) I see an average CPU load of 40%, a median CPU load of 40%, and a maximum CPU load between 70 and 100%. My opinion is that it is not safe to enable parallelization in this environment, OK? But when is it safe: if maximum load is under 50%? Or 25%? What factors should I take into account and what numbers are "safe"? Could you please share your experience or point me to an available guide? Thank you very much in advance. Best regards, Lukas Mecir

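For reference (not a sizing recommendation), the setting in question is parallelIngestionPipelines in server.conf; each additional pipeline set consumes roughly another pipeline's worth of CPU and memory, which is why the headroom question matters:

# server.conf on an indexer or heavy forwarder
[general]
parallelIngestionPipelines = 2
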
In one of our dashboards we have a table with a custom action. When the user clicks on a field, we check whether it is the delete field and, if so, get the name of the item we want to delete and put it in a JavaScript variable. We also have a search that needs to use this variable, something like the following, where someVariable is updated in a function:

var someVariable = "";
var validateChannelCanBeDeletedSearch = new SearchManager({
  id: "validate something",
  autostart: false,
  search: `| inputlookup some | search some_field="${someVariable}"`
});

Later we manually trigger the search. The problem is that the updated value of someVariable is not used in the query. How can we make it use the updated value?

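The template literal is evaluated once, when the SearchManager is constructed, so later changes to the variable never reach the query string. A sketch of one fix, assuming the standard SplunkJS SearchManager API: rebuild the search string from the current value just before starting the search.

// inside the click handler, after someVariable has been updated
validateChannelCanBeDeletedSearch.settings.set(
  "search",
  `| inputlookup some | search some_field="${someVariable}"`
);
validateChannelCanBeDeletedSearch.startSearch();
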
Hey there! I am trying to write some code that interacts with the Splunk REST API. I use the Splunk Free edition, version 8.2.3.3. Unfortunately I cannot get any response from port 8089:

```
$ curl https://localhost:8089/services/search/jobs/
curl: (28) Operation timed out after 300523 milliseconds with 0 out of 0 bytes received
```

The URI does not matter; I cannot get any reaction whatsoever. Is this a known limitation? Or do I need to configure something? Thanks a lot for any suggestions!

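A timeout (as opposed to an authentication error) suggests nothing is listening on that port or a firewall is dropping the traffic. A couple of checks, sketched assuming shell access to the Splunk host:

```
# confirm splunkd's configured management port
$SPLUNK_HOME/bin/splunk btool web list --debug | grep mgmtHostPort
# confirm something is actually listening on 8089
ss -tln | grep 8089
```
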
Hi, I have a query: we need to submit a node downtime duration report per node, monthly; every month, how much time each node was down and how much time it was up. Please help me with the query. Sample log below (100 is up, 200 is down):

08/29/2022 10:05:00 +0000,host="0.0.1.1:NodeUp",alert_value="100"
08/29/2022 10:05:00 +0000,host="0.1.1.1:NodeUp",alert_value="100"
08/29/2022 10:00:00 +0000,host="0.0.1.1:NodeDown",alert_value="200"
08/23/2022 10:10:00 +0000,host="0.0.1.1:NodeUp",alert_value="100"
08/23/2022 09:55:00 +0000,host="0.0.1.1:NodeDown",alert_value="200"

Example: if a node was down for 30 minutes overall in a month, across different dates, we still need to display the hostname along with the downtime (i.e. 30 min) and the remaining uptime duration in one row. Note: our saved search runs every 5 minutes and emits log data like the above, so the timestamps come every 5 minutes.

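Since the saved search emits one status event per node every 5 minutes, a simple sketch is to count the down intervals and multiply by the interval length. The index filter is a placeholder, and the 5-minute interval is taken from the note above; this assumes exactly one status event per node per run.

index=your_index "NodeUp" OR "NodeDown"
| rex field=host "(?<node>[^:]+):(?<state>NodeUp|NodeDown)"
| stats count(eval(state="NodeDown")) as down_count count as total_count by node
| eval downtime_min=down_count*5, uptime_min=(total_count-down_count)*5
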
Hello community, I have a problem with a search that does not return a result. For the purposes of a dashboard, I need one of my searches to display 0 when it does not return a result. I have already managed this in some fairly complex searches, but I cannot get it to work for a fairly simple one. Note that when I have a result, it is displayed fine; the search runs correctly. I attempted to use | eval ACKED = if(isnull(ACKED) OR len(ACKED)==0, "0", ACKED) but the search does not seem to apply it. I found several topics on similar subjects (using fillnull, for example) but without result. I think it's not complicated but I can't put my finger on the problem; do you have any idea? Best regards, Rajaion

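One reasoning step that may explain it: eval and fillnull operate on rows, so if the search returns no events at all there is nothing for them to act on. Two common workarounds, sketched with a placeholder base search. The first works when a simple count is all that is needed, since stats count always emits one row (with 0 when nothing matched):

index=your_index your_filters
| stats count as ACKED

The second appends a zero row only when the result set is empty:

index=your_index your_filters
| appendpipe [ stats count as ACKED | where ACKED=0 ]
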
hello I have a strange behavior with an eval command. If I do this, it works well:

| eval site=case(site=="0", "AA", site=="BR", "BB", site=="PER", "CC", 1==1, site)
| eval s=lower(s)
| search site="$site$"

but if I put | search site="$site$" directly after the eval, the search command is not recognized as a Splunk command!

| eval site=case(site=="0", "AA", site=="BR", "BB", site=="PER", "CC", 1==1, site)
| search site="$site$"

What is wrong, please?

Hi Team, I have an NFR license and want to install ITSI. I was trying to install the app; in the process it routed to Splunkbase, and my Splunk account authorization was denied. What should I do? Can someone please help?

hello In a first dashboard, I have a dropdown list:

<input type="dropdown" token="site" searchWhenChanged="true">
  <label>Espace</label>
  <fieldForLabel>site</fieldForLabel>
  <fieldForValue>site</fieldForValue>
  <search>

So when I choose a site value, the dashboard is updated with the selected site. Now I want to drill down to another dashboard from the selected site, like this:

<link target="_blank">/app/spl_pu/test?form.$site$=$click.value$</link>

In the second dashboard, I try to use the token like this, but it doesn't work:

| search site="$site$"

Could you help, please?

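A sketch of the likely fix: in the drilldown URL, the part before the equals sign must be the literal token name expected by the target dashboard (form.site), and only the value comes from a token. Assuming the second dashboard declares an input with token="site":

<link target="_blank">/app/spl_pu/test?form.site=$site$</link>

(or form.site=$click.value$ if the drilldown should carry the clicked value instead of the dropdown selection).
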
Hi, we have a requirement to install the Splunk Add-on for SQL Server. We are using Splunk Cloud with the classic experience. Where all do we need to install this add-on? Is it sufficient to install it on the search head, or does it have to be installed on the heavy forwarder as well? Please clarify. The docs suggest installing on the search head only, per the table below.

Splunk instance type | Supported | Required | Comments
Search Heads | Yes | Yes | Install this add-on to all search heads where Microsoft SQL Server knowledge management is required.
Indexers | Yes | No | Not required, because this add-on does not include any index-time operations.
Heavy Forwarders | Yes | No | To collect dynamic management view data, trace logs, and audit logs, you must use Splunk DB Connect on a search head or heavy forwarder. The remaining data types support using a universal or light forwarder installed directly on the machines running MS SQL Server.
Universal Forwarders | Yes | No | To collect dynamic management view data, trace logs, and audit logs, you must use Splunk DB Connect on a search head or heavy forwarder. The remaining data types support file monitoring using a universal or light forwarder installed directly on the machines running MS SQL Server.