All Topics

I am creating a search that checks compliance of the Palo Alto signatures we receive. We are receiving 4 sets of dates: app-release-date, av-release-date, wildfire-release-date, and threat-release-date. One of these dates (app-release-date) does not get updated daily, meaning that if today's date is 5/20/2021, the last updated release for app-release-date could be 4/20/2021. Now, in a pie chart comparing against today's date, it will show that app-release-date is out of date by 30 days, but that is not the case; it just means that the most recent date for app-release-date is 4/20/2021. The question is: how can I use 4/20/2021 (the most recent date) in an "eval ... case" condition instead of using now()?

For your perspective, this is what I've done using now() as the baseline:

| eval av-release-date=round(strptime('av-release-date', "%Y-%m-%d %H:%M:%S")), today=now(), timediff=today-'av-release-date', chart_date=strftime('av-release-date', "%Y-%m-%d")
| eval color=case(timediff<=86400, "within 24 hrs", timediff>86400 AND timediff<=259200, "within 72 hrs", timediff>259200 AND timediff<=604800, "within 168 hrs", timediff>604800, "over 168 hrs")
| stats count by color

This returns a chart (screenshot not shown). The app-release-date conditions should be:

The most recent = green -----> the most recent is not now(); it could be 4/20/2021
Most recent - 7 days = yellow
Most recent - 30 days = red
Most recent > 30 days = black

Please advise, and thank you in advance. Regards,
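One way to anchor the comparison on the newest date present in the data rather than on now() is eventstats max(). The following is only a sketch under the field names from the question, with the 7/30-day thresholds from the color rules; it is not a tested answer:

```
| eval app_release=round(strptime('app-release-date', "%Y-%m-%d %H:%M:%S"))
| eventstats max(app_release) as most_recent
| eval timediff=most_recent-app_release
| eval color=case(timediff<=0, "green",
    timediff<=604800, "yellow",
    timediff<=2592000, "red",
    true(), "black")
| stats count by color
```

Here most_recent replaces now() as the baseline, so the newest app-release-date in the results always lands in the "green" bucket.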
Hi, I have a table as shown below. I want to get the % of total for each status for the previous 6 days. How do I write a query to get this?

DATE        Status_1  Status_2  Status_3
2021-05-19  14        33        123
2021-05-18  45        12        456
2021-05-17  4         6         213
2021-05-16  5         8         564
2021-05-15  4         9         987
2021-05-14  4         0         543
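If the percentage wanted is "each status as a share of that day's row total", one sketch (assuming the column names shown above, untested) uses addtotals to build a per-row total and foreach to convert each Status_* column:

```
| addtotals fieldname=Total Status_*
| foreach Status_* [ eval <<FIELD>>_pct=round('<<FIELD>>'/Total*100, 2) ]
| fields DATE *_pct
```

If the intent is instead each status as a share of its own 6-day column total, an eventstats sum() per column would be the analogous starting point.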
Hi all. We currently use Alert Manager to annotate apps, and for several of them we have a drilldown that inputlookups a lookup table, edits it, then outputlookups it afterwards. This means the team can use drilldowns to verify activity from users or suppress notifications, for example. Due to a small (but inevitable) incident where a lookup table was erased, we are now looking to use the Lookup Editor app so as to have some sort of version control. However, looking at it, it seems that version control is only maintained if the table is edited in the Lookup Editor app itself? Does this mean that drilldowns will not cause a backup to be made, and that we'll instead have to link to this table in the app? If so, that's fine, but can values from an alert be passed through to edit fields already? Or would any modifications to the tables have to be copy/pasted? Thanks in advance.
Hi, I need to extract a hostname or IP address from a raw log. My log looks like this:

somerandometest  host: abc@email.com \r\n someothertext

I wrote the rex as "Host:.(?<Host>.*?)(\\r)". This works great on a regex testing site, but when I put it in a Splunk search it does not work. Please let me know if anything needs to be updated. I'm extracting the text between "host:" and "\r"; the extracted field can be a hostname or an IP address, hence capturing everything up to "\r". Thanks in advance!
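Two common stumbling blocks when moving a pattern from a regex site into SPL are that rex is case-sensitive by default (the log shows "host:" but the pattern says "Host:") and that a literal backslash in the event usually needs an extra level of escaping inside the SPL string. A hedged sketch, untested:

```
| rex field=_raw "(?i)host:\s*(?<Host>.*?)\s*\\\\r"
```

The (?i) makes the match case-insensitive, and \\\\r is intended to match the two literal characters backslash and r in the raw event.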
I'm trying to calculate the variance and the delta between values of a multivalue field that contains epoch timestamps. The purpose is to determine the interval between web requests from a system to a specific domain/URL. The mvfield (event_time) will contain at most 100 values.
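One sketch, assuming each event carries the multivalue event_time field plus some per-system identifier (src here is a placeholder name, not from the question): expand the values into rows, sort, take pairwise deltas with streamstats, then compute the statistics.

```
| mvexpand event_time
| sort 0 src event_time
| streamstats current=f window=1 last(event_time) as prev by src
| eval delta=event_time-prev
| stats avg(delta) as avg_interval var(delta) as variance by src
```

With at most 100 values per event, mvexpand should stay well within default memory limits.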
Hi, I'm trying to activate the Splunk Cloud Platform trial and keep getting the following error: "We're sorry, an internal error was detected when creating the stack. Please try again later." I've seen that it's a common issue, but couldn't find a solution. Is anybody from the Splunk team here able to help? Thanks!
Hi all, I want to create a monitoring stanza that combines the log paths below:

[monitor:///opt/tomcat/logs/localhost_access_log*.log]
[monitor:///opt/rh/jws5/root/usr/share/tomcat/logs/localhost_access_log*.log]
[monitor:///opt/prozone/tas-community-7.6-1/multiserver/logs/localhost_access_log*.log]
[monitor:///opt/apache-tomcat*/logs/localhost_access_log*.log]
[monitor:///opt/atlassian/jira/logs/localhost_access_log*.log]

Will something like this work for all of them?

[monitor:///opt/../.../logs/localhost_access_log*.log]
[monitor:///opt/../.../.../logs/localhost_access_log*.log]

What would be the best solution you would propose? Thank you, O.
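In inputs.conf, the three-dot wildcard `...` recurses through any number of subdirectory levels, so a single stanza along these lines might cover all five paths. This is a sketch only; note that it will also pick up any other matching files anywhere under /opt, so recursion this broad can be expensive and a whitelist (or keeping a few narrower stanzas) may be preferable:

```
[monitor:///opt/.../logs/localhost_access_log*.log]
whitelist = localhost_access_log.*\.log$
```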
Currently, LDAP authentication is configured through an app on the search heads and managed via the deployment server. However, we are still using local admin accounts for the other servers. I tried adding the other Splunk servers to a serverclass that has the same LDAP authentication app, but it's still not working. First I checked for the bindDN and bind password in authentication.conf, but it seems it's configured as "anonymous". Next, I tried following the instructions from the Splunk manual; the result is below:

ber_get_next failed
Can't contact LDAP Server (-1)

Please help fix this issue. Thanks!
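For reference, a minimal authentication.conf shape with an explicit bind account looks roughly like this. Every value below is a placeholder for illustration, not a known-good setting for this environment; "Can't contact LDAP Server (-1)" generally points at host/port/SSL reachability rather than at the bind credentials themselves, so those are the settings to verify first:

```
[authentication]
authType = LDAP
authSettings = corp_ldap

[corp_ldap]
host = ldap.example.com
port = 636
SSLEnabled = 1
bindDN = cn=splunk-svc,ou=service,dc=example,dc=com
bindDNpassword = <set via the UI so it is stored encrypted>
userBaseDN = ou=people,dc=example,dc=com
groupBaseDN = ou=groups,dc=example,dc=com
```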
Hello team, I am trying to ignore the value "Total" only when its corresponding OS_Type is "Linux". Below is what I tried:

| search DataType=Executive_Summary
| search OS_Type=Linux AND OS_SubType!=Total
| chart values(Servers_Skipped_Patching) as Skipped values(Servers_Failed_Patching) as Failed values(Servers_Successfully_Patching) as Successful by "OS_Type" "OS_SubType"

However, I am also getting OS_SubType=Total from OS_Type=Windows. Please let me know how I can ignore "Total" only for Linux and not for any other OS_Type.
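One sketch is to express the exclusion as a single NOT clause, so that "Total" is dropped only when it co-occurs with Linux and every other OS_Type keeps its Total rows (field names taken from the question, untested):

```
| search DataType=Executive_Summary NOT (OS_Type=Linux AND OS_SubType=Total)
| chart values(Servers_Skipped_Patching) as Skipped values(Servers_Failed_Patching) as Failed values(Servers_Successfully_Patching) as Successful by OS_Type OS_SubType
```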
I am trying to use fillnull_value with tstats as stated in the documentation, but it is not working as desired: it's not filling in the null values.

| tstats summariesonly=true allow_old_summaries=true fillnull_value="NULL" count FROM datamodel=Linux_System.Linux_System WHERE (Linux_System.src_host=*) by src_host
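Two things worth checking, offered as guesses rather than a confirmed fix: the by-clause field normally needs the datamodel prefix (Linux_System.src_host, not src_host), and the WHERE src_host=* filter itself discards events where the field is null, which defeats fillnull_value before it can act. A sketch without that filter:

```
| tstats summariesonly=true allow_old_summaries=true fillnull_value="NULL" count
    FROM datamodel=Linux_System.Linux_System
    BY Linux_System.src_host
```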
Hi, I have a csv file that is updated by a script once a minute. The output is similar to:

time,queuename,vpn,last-message-id-spooled,max-message-size-exceeded,total-messages-spooled,num-messages-spooled,current-spool-usage-in-mb,bind-count,recordsinperiod,eol
2021-05-20_10-20,q.static.prp.solacequeue, test_uat_de, 117446717393, 0, 40340019 , 0, 0, 25 ,0,eol
2021-05-20_10-20,q.static.prp.solacequeue-number2, test_uat_de, 117493, 0, 4039 , 0, 0, 25 ,0,eol
2021-05-20_10-19,q.static.prp.solacequeue, test_uat_de, 0, 0, 0 , 0, 0, 0 ,0,eol
2021-05-20_10-19,q.static.prp.solacequeue-number2, test_uat_de, 0, 0, 0 , 0, 0, 0 ,0,eol

Now, I want to create a search query that shows only the last update in the csv file, with a result like this:

q.static.prp.solacequeue, test_uat_de, 117446717393, 0, 40340019 , 0, 0, 25 ,0,eol
q.static.prp.solacequeue-number2, test_uat_de, 117493, 0, 4039 , 0, 0, 25 ,0,eol

I tried the search below, but the output still shows everything that happened during the day instead of only those 2 lines:

index=* sourcetype=queues | stats latest(time) by time queuename last_message_id_spooled current_spool_usage_in_mb bind_count recordsinperiod

What am I missing? Thanks, Gabriel
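The stats latest(time) by time ... collapses nothing, because time is also a group-by field, so every distinct timestamp survives. One sketch (untested) is to find the maximum time across all results first, then keep only the rows that carry it:

```
index=* sourcetype=queues
| eventstats max(time) as latest_time
| where time=latest_time
| table queuename vpn last_message_id_spooled current_spool_usage_in_mb bind_count recordsinperiod
```

This relies on the time format (YYYY-MM-DD_HH-MM) sorting correctly as a string; otherwise, convert it with strptime first.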
Hi team, I'm trying to build a search that finds the alerts which have been triggered during a specific period of time for the hosts in a lookup, and ideally I want to show the results with the below:
- hostname
- description of the alert
- when the alert was triggered
I would appreciate any guidance or assistance. Kind regards
On the latest version (8.0.2), if the license master is down, will search still work, or will it wait for 72 hours and then stop?
Hi Splunk community, in my environment I have 1 indexer in each of 2 different datacenters, 1 search head, 1 cluster master, and X forwarders. Is the following configuration in server.conf possible, with 1 search peer in each of 2 sites, if replication is not required? Again, I do not need replication.

site_replication_factor = origin:1, total:1
site_search_factor = origin:1, total:1

Or can the search peers in the different datacenters form a single-site, 2-node cluster with:

replication_factor = 1
search_factor = 1

I know that the recommendation is for latency to be <100ms. Thanks for your inputs!
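For the single-site alternative, the "no replication" intent is usually expressed on the cluster master roughly like this. This is a sketch only, not validated against this topology; with replication_factor = 1 each bucket exists on exactly one peer, so losing either indexer makes its data unsearchable until it returns:

```
[clustering]
mode = master
replication_factor = 1
search_factor = 1
```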
Hi, the data set to search for in field1 is: ("foo", "bar", "execute", "thanx", "tax", "trade"). If field1 includes any 3 of the strings in the data set, it should show up in the search results.
1. field1="book car test sell buy trade execute" --> WON'T match: fewer than three of the items in the data set.
2. field1="book bar execute tax test" --> WILL match, since "bar", "execute" and "tax" are included in field1.
3. field1="test foo exec bar car" --> WON'T match: fewer than three of the items in the data set.
Please let me know how I can do it. Thanks,
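One sketch is to count the matches with a sum of if(match()) tests and keep events where the count reaches 3. The \b word boundaries keep "exec" from counting as "execute" (which matters for example 3); field and pattern names are taken from the question, untested:

```
| eval hits = if(match(field1,"\bfoo\b"),1,0) + if(match(field1,"\bbar\b"),1,0)
            + if(match(field1,"\bexecute\b"),1,0) + if(match(field1,"\bthanx\b"),1,0)
            + if(match(field1,"\btax\b"),1,0) + if(match(field1,"\btrade\b"),1,0)
| where hits>=3
```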
There seem to be two Check Point add-ons: one released by Splunk and the other by Check Point themselves. The Splunk-developed one seems to be the latest (April 2, 2021), and the Check Point-developed one was last updated on Jan. 28, 2020. Can someone please recommend which one I should actually use?
I am looking to move the platypus app from one server to another and was able to do that successfully. But I cannot find where the saved dashboards in the platypus app are located so that I can copy them to the new server. Can anyone help me find where the dashboards are stored?
How do I convert this:

_time  Server  col1  col2  col3
8am    SerA    1     2     3
9pm    SerA    5     6     7

into this?

_time  Category   value
8am    SerA_col1  1
8am    SerA_col2  2
8am    SerA_col3  3
9pm    SerA_col1  5
9pm    SerA_col2  6
9pm    SerA_col3  7
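One sketch (untested) is to rename each colN to <Server>_colN using foreach with a dynamic field name on the eval left-hand side, then flip the table from wide to long with untable:

```
| foreach col* [ eval {Server}_<<FIELD>>='<<FIELD>>' | fields - <<FIELD>> ]
| fields - Server
| untable _time Category value
```

untable expects exactly one row-key field (_time here), which is why Server is folded into the column names first and then dropped.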
Check Point logs received through OPSEC LEA have stopped logging into Splunk. The TA version is 4.3.1. Upon checking the TA logs, below is the error message I am seeing:

ERROR: session end reason: no communication

Please advise. Thanks!
I have a data set as seen below.

exec             arguments
/bin/sh          sh -c
                 uname -p ** /dev/null
/sbin/ldconfig   /bin/sh
                 /sbin/ldconfig -p
/bin/uname       uname -m

As seen in the sample data above, some of the arguments fields have 3 lines, some have 2 or 5, etc.; they are all different. I would like to get the following result:

exec             arguments
----------------------------------------
/bin/sh          sh -c uname -p ** /dev/null
/sbin/ldconfig   /bin/sh /sbin/ldconfig -p
/bin/uname       uname -m

How can I get this result? Thanks,
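If arguments is a multivalue field (which is what multiple lines per cell usually means), one sketch is simply to join its values with a space; mvjoin works regardless of how many values each event has (field names taken from the question, untested):

```
| eval arguments=mvjoin(arguments, " ")
| table exec arguments
```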