All Topics

Hi guys. Recently I got into trouble solving the "puzzle" described in the title.

What we need:
1) Always trigger on "Job_Start"
2) Monitor its processing

Variables:
1) "Job_Start" is dynamic: it can occur at 01:00, at 04:30, at 15:00, at 17:15, and so on, around the clock. So "Job_Start" is the starting point.
2) "Job_End" is the tricky variable: it may exist, or not exist at all, and the focal point is to check whether it occurs within a maximum of 2 hours of "Job_Start".

What I originally did:

tag=mytag host=server earliest=-3h
| transaction maxspan=120m maxevents=-1 startswith="Job_Start" endswith="Job_End" host,source
| [... all the if statements based on the "duration" field]

OK, but what if the job never ends? So:

tag=mytag host=server earliest=-3h
| transaction maxspan=120m maxevents=-1 startswith="Job_Start" host,source
| eval CHECK_END=if(match(_raw,"Job_End"),_time,"X")
| [... all the if statements based on "duration" plus the CHECK_END variable]

OK, this is a workable compromise. Now, here is what I actually scheduled (every 15 minutes), after thinking about possible missed timings and other issues:

tag=mytag host=server earliest=-3h
| sort + _time
| eventstats first(_time) as tSTART last(_time) as tEND
| eval RANGE=round((tEND-tSTART)/60)
| eval CHECK_START=if(match(_raw,"Job_Start"),_time,"X")
| eval CHECK_END=if(match(_raw,"Job_End"),_time,"X")
| stats min(CHECK_START) as START min(CHECK_END) as END last(RANGE) as RANGE
| where START!="X"
| eval DUR=round((END-START)/60)
| eval PASS=round((now()-START)/60)
| eval msg=if((START="X") AND (END="X"),"NO Job_Start last "+RANGE,msg)
| eval nota="already skipped with the where above!"
| eval msg=if((START!="X") AND (END="X") AND (PASS>120),"Job_Start no Job_End after "+PASS,msg)
| eval msg=if((START!="X") AND (END!="X") AND (PASS>120),"Job_Start with Job_End after "+DUR,msg)
| eval host="server"
| eval source="mylog"
| eval displaythis="LOG:"+source+"__"+msg+"__[test]"
| eval TimeStamp=strftime(now(),"%Y%m%d.%H%M%S")
| table TimeStamp host displaythis

The schedule is running; I still have to test its real effect. Any advice on what I did above, and on WHAT COULD BE DONE BETTER AND MORE EFFICIENTLY?
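The core check the scheduled search performs (did "Job_End" arrive within 120 minutes of "Job_Start", and if not, how long has it been?) can be sketched outside SPL as plain epoch arithmetic. A minimal sketch with hypothetical epoch values, not the author's actual data:

```python
def classify(start, end, now, max_minutes=120):
    """Classify a job run from its start epoch and an optional end epoch,
    mirroring the 120-minute window used in the scheduled search."""
    elapsed = round((now - start) / 60)  # minutes since Job_Start
    if end is not None:
        return "ended after %d min" % round((end - start) / 60)
    if elapsed > max_minutes:
        return "no Job_End after %d min" % elapsed
    return "still within the window"

# Hypothetical run: started at t=0, now 150 minutes later, no Job_End seen.
print(classify(0, None, 150 * 60))  # no Job_End after 150 min
```

The same three-way branching is what the chained `eval msg=if(...)` statements express in SPL.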
I think this is a sample snippet of the auth0 SAML XML response, but there is no attribute I can use that carries group information (see below). I tried, in authentication.conf, role = Group (first line), and got:

splunkd.log:04-03-2020 17:39:57.331 +0000 ERROR Saml - No value found in SamlResponse for match key=saml:AttributeStatement/saml:Attribute attrName=role = Groups err=No nodes found for xpath=saml:AttributeStatement/saml:Attribute
splunkd.log:04-07-2020 16:30:37.575 +0000 ERROR Saml - No value found in SamlResponse for match key=saml:AttributeStatement/saml:Attribute attrName=Groups err=No nodes found for xpath=saml:AttributeStatement/saml:Attribute

There is an auth0 API that has group info: "myname.auth0.com/api/v2/users/{id}/roles". How do I get Splunk to access it? Response snip
Hello everybody, I see strange behaviour with data model acceleration. I have a data model accelerated over 3 months. According to the internal logs, the scheduled acceleration searches are not skipped and they complete with results. However, if I run a tstats search over the last month with "summariesonly=true", I do not get any values back; if I run the same tstats search with "summariesonly=false", I do get the expected results. Yet if I run the tstats search over the last 90 days with "summariesonly=true", I get some values back. Have you ever faced a similar situation? Could this depend on the small number of events, i.e. on buckets not having rolled yet? Please note that this does not look like a generic "recent data not yet summarised" issue, because:

- acceleration searches complete successfully every 5 minutes;
- the data model summary is 100% built;
- I am missing data at least from the last month.

Thank you for your support!
Need help with the query builder.

| dbxquery connection="ITDW" shortnames=true query="SELECT [Incident_Number], [Assignee], [Last_Modified_By], [Customer], [Customer_Site_Group], [Customer_Site], [Summary], [Notes], [Priority], [Assigned_Support_Group], [Status], [Status_Reason], [Resolution], [Reported_Date], [Last_Resolved_Date], [Last_Modified_Date], [Owner_Group], [Submit_Date] FROM [shared].[ITSM_INC_MAIN] INC WHERE [Submit_Date] BETWEEN DATEADD(D,-3,GETDATE()) AND GETDATE() AND [Summary] like '%%' "

My challenge is changing the date range of the search. Here is how the Submit_Date format comes:

2020-04-04 11:35:51.0
2020-04-04 11:35:57.0
2020-04-04 11:36:13.0
2020-04-04 11:37:22.0

Below is the line where I am struggling to provide a defined time range:

WHERE [Submit_Date] BETWEEN DATEADD(D,-3,GETDATE()) AND GETDATE()

Please help me edit this line.
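If the goal is a fixed window instead of the rolling DATEADD range, one option is to compute the boundary timestamps outside SQL and substitute them into the clause. A minimal sketch, assuming the datetime literal format shown in the Submit_Date samples above; `submit_date_clause` is a hypothetical helper, not part of dbxquery:

```python
from datetime import datetime, timedelta

def submit_date_clause(end, days_back=3):
    """Build a BETWEEN clause over the last `days_back` days ending at `end`."""
    start = end - timedelta(days=days_back)
    fmt = "%Y-%m-%d %H:%M:%S"  # matches the Submit_Date samples above
    return ("[Submit_Date] BETWEEN '%s' AND '%s'"
            % (start.strftime(fmt), end.strftime(fmt)))

print(submit_date_clause(datetime(2020, 4, 4, 12, 0, 0)))
# [Submit_Date] BETWEEN '2020-04-01 12:00:00' AND '2020-04-04 12:00:00'
```

The generated clause would then replace the DATEADD line inside the dbxquery `query=` string.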
I would like to do some math on the retrieved count of each value, e.g. 318*5.5 + 418*2.5 + 54*5 + 83*2, and get the total from the resulting output (screenshot attached). Query used:

index=omi_qa host=DEFRNC* sourcetype=all_events_custom_attributes SEVERITY IN (CRITICAL,MAJOR,MINOR) OR (SEVERITY=WARNING AND APPLICATION=NNMi)
| eval {idx} = elt
| stats latest(CLIP) as CLIP, values(UMN) as UMN by ID
| lookup clipUMNs.csv UMN OUTPUTNEW Solution
| search CLIP != "NULL" AND CLIP != "TRUE"
| where Solution = "Escalation"
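The arithmetic being asked for is just a weighted sum of the per-value counts. A quick sketch using the numbers from the example; which count pairs with which severity is my assumption, since the screenshot is not shown:

```python
# Weighted total of per-severity counts, using the figures from the question.
counts  = {"CRITICAL": 318, "MAJOR": 418, "MINOR": 54, "WARNING": 83}
weights = {"CRITICAL": 5.5, "MAJOR": 2.5, "MINOR": 5,  "WARNING": 2}

total = sum(counts[k] * weights[k] for k in counts)
print(total)  # 3230.0
```

In SPL the same idea would be an `eval` assigning a weight per severity followed by a `stats sum`.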
Hi All, I'm trying to install Splunk Enterprise on a Windows server to capture data from NVM on one ASA in my environment, following this guide: https://www.cisco.com/c/en/us/support/docs/security/anyconnect-secure-mobility-client/200600-Install-and-Configure-Cisco-Network-Visi.html Do I need to install that Linux box, given it's a pretty small environment? The guide says: "In a typical distributed Splunk Enterprise deployment, the collector should be run on either a standalone 64-bit Linux system or a Splunk Forwarder node running on 64-bit Linux." But I have a Splunk forwarder installed as part of my Enterprise install on the Windows box; does that mean it's sufficient, and I can carry on without installing the Ubuntu box? Thanks.
I've been playing around a bit with some NFL statistical data (source data is from http://nflsavant.com/about.php). Each passing play has a field called PassType where the six possible values are short left, short middle, short right and deep left, deep middle, and deep right. I'd like to make a choropleth-style heat map or table to visualize the data. The visualization would be two rows and three columns: deep passes on top, short passes on the bottom, with left, right and middle where you'd expect. Is a choropleth map the best way to visualize this data? Is there something easier/better? EDIT: I've mostly worked through this. First I got my data into single-number visualizations organized in a logical manner (deep passes on top, short on bottom; left, middle and right respectively). Then I figured out how to do custom placemarks in a KML file to build the heat map. This was quite the mental exercise, but I think I'm mostly there. The end result isn't as pretty as I was hoping, but it is fun to play with the data.
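Before reaching for a map visualization, the 2-row by 3-column layout can be produced as a simple pivot of the PassType values. A sketch with hypothetical counts (the real ones would come from the nflsavant data):

```python
# Pivot hypothetical PassType counts into the deep/short x left/middle/right grid.
plays = {"deep left": 12, "deep middle": 8, "deep right": 15,
         "short left": 40, "short middle": 33, "short right": 45}

grid = {}
for pass_type, count in plays.items():
    depth, side = pass_type.split(" ", 1)   # e.g. "deep", "left"
    grid.setdefault(depth, {})[side] = count

for depth in ("deep", "short"):             # deep row on top, short below
    row = grid[depth]
    print(depth, [row[s] for s in ("left", "middle", "right")])
```

In Splunk the equivalent pivot could be done with `chart count over depth by side` after splitting PassType into the two parts.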
Hey All, back again with another interesting question: how do we get the number of hits per day for Linux/livesite servers? Example of the query:

(index=cloudfoundry OR index=xxx_xxxx) sourcetype=access_xxxx_wcookie (/xx/xxx/content-adpater-webservice) AND (NOT -Pre NOT -Post) AND (status=200) AND (host=n OR host=a OR host=b OR host=t OR host=r OR host=m)
| stats count

Any help is appreciated. Thanks, Mike
When I run a report of roles that have been modified in our Splunk Cloud instance, I see these entries:

Date      Time      user        action     info     role_name
04-06-20  16:50:18  _ops_admin  edit_user  granted  _ops_admin
04-06-20  16:50:15  _ops_admin  edit_user  granted  _ops_admin
04-06-20  16:49:52  _ops_admin  edit_user  granted  _ops_admin
04-06-20  16:44:06  _ops_admin  edit_user  granted  _ops_admin
04-06-20  16:44:03  _ops_admin  edit_user  granted  _ops_admin

We don't have that user or role name defined in our instance. Does anyone have an idea of what this is, and whether I should be concerned?
In the event:

cs3Label=HostName_Ext cs3=xx.xx.x.xx cs5Label=Deep src cs5=0 cs10Label=Deep_zone cs10=0 cn2Label=Score cn2=71 cn4Label=Deep_threat_type cn4=5 dmac=00:xx:xx:xx:xx

props.conf:

[cefevents]
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false
pulldown_type = 1
REPORT-cefevents = cefHeaders,cefKeys,cefCustom

transforms.conf:

[cefHeaders]
REGEX = CEF:\s?(?<cef_cefVersion>\d+)\|(?<cef_vendor>[^|]*)\|(?<cef_product>[^|]*)\|(?<cef_version>[^|]*)\|(?<cef_signature>[^|]*)\|(?<cef_name>[^|]*)\|(?<cef_severity>[^|]*)

[cefKeys]
REGEX = (?:_+)?(?<_KEY_1>[\w.:\[\]]+)=(?<_VAL_1>.*?(?=(?:\s[\w.:\[\]]+=|$)))
REPEAT_MATCH = True
CLEAN_KEYS = 1

[cefCustom]
REGEX = (\S+)=([^=]*)\s+(?:\1Label)=([^=]+)(?:(?:\s\w+=)|$)
FORMAT = $3::$2
KEEP_EMPTY_VALS = True

cefHeaders is extracting as expected, but cefKeys and cefCustom are not able to extract the key-value pairs. Please advise.
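The pairing that [cefCustom] is trying to do (map each csN/cnN value to its Label name) can be prototyped outside Splunk. Note that the sample event puts each Label *before* its value, while the [cefCustom] regex expects the value first, so any regex has to tolerate the Label-first ordering. A minimal sketch with a simplified hypothetical event (no spaces inside values):

```python
import re

# Hypothetical CEF tail in the same shape as the sample event:
# <key>Label=<name> comes before <key>=<value>.
event = "cs3Label=HostName_Ext cs3=10.0.0.1 cn2Label=Score cn2=71"

pairs  = dict(re.findall(r"(\w+)Label=(\S+)", event))  # key -> friendly name
values = dict(re.findall(r"(\w+)=(\S+)", event))       # raw key -> value

# Rename each raw key to its Label, as FORMAT = $3::$2 intends.
fields = {name: values[key] for key, name in pairs.items() if key in values}
print(fields)  # {'HostName_Ext': '10.0.0.1', 'Score': '71'}
```

This suggests checking whether the Label-before-value ordering is what keeps the [cefCustom] regex from matching.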
I've been attempting to add AppDynamics to a set of NodeJS components we have. We use a private AppD Controller within the company firewall, running version 4.4.3. I have only been able to get the agent to communicate with the controller via the Java proxy on version 4.4.3. Whenever I try to configure libagent: true, the agent does start, but I get an HTTP error in the agent logs. I've also tried different module versions and different NodeJS versions with no luck. Below is our AppD profile added to server.js.

AppDynamics profile:

{
  controllerHostName: 'XXXXXXXXXXXX',
  controllerPort: '8181',
  controllerSslEnabled: true,
  accountName: 'XXXXXXX',
  accountAccessKey: 'XXXXXXXXXXX',
  applicationName: 'icg-msst-icgbuild-loki-167539-DEV',
  tierName: 'loki-ui',
  nodeName: 'loki-ui',
  libagent: true,
  debug: true
}

[DEBUG] AppDynamics agent logs: /tmp/appd/66b173439b00d9f3b30dbc626af4dbd8
NodeJS version: 10.10.0 (build & runtime)
AppD NodeJS agent version: 20.3.1
OS version: Red Hat Enterprise Linux Server release 6.10 (Santiago)

[Log file redacted. Post edited by @Ryan.Paredez: please do not attach or paste log files into community posts, for security and privacy reasons.]
I have 2 log files from different sources. Both log files have statements indicating either a "Transaction-Start" or a "Transaction-End". "EPOCH" is a field common to both log files, giving the timestamp of either the start or the end of a transaction. I want to write a query that fetches the EPOCH of "Transaction-Start" from log file 1 (call it start) and the EPOCH of "Transaction-End" from log file 2 (call it end). Then I want to compute the difference between end and start and display only those logs where the difference exceeds a threshold, say 10000. What I have tried is below:

index=someIndex ENVIRONMENT="someEnv" (source="/log/source1.log" "Transaction-Start" "EPOCH" as start) OR (source="/log/source2.log" "Transaction-End" "EPOCH" as end)
| eval difference=end-start
| where difference>10000

But this is not working. Looking for help composing this search the right way.
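The computation being described needs the start and end events joined on some shared key (a transaction id), which the attempted search doesn't yet do. A sketch of the intended logic with hypothetical epochs and a hypothetical "txn" id field:

```python
# Pair hypothetical start/end epochs per transaction id and keep slow ones.
starts = {"txn1": 1000, "txn2": 2000}   # EPOCH of Transaction-Start (source1.log)
ends   = {"txn1": 15000, "txn2": 3000}  # EPOCH of Transaction-End   (source2.log)

THRESHOLD = 10000
slow = {t: ends[t] - starts[t]
        for t in starts
        if t in ends and ends[t] - starts[t] > THRESHOLD}
print(slow)  # {'txn1': 14000}
```

In SPL the analogous pairing would typically be a `stats min(...) max(...) by <transaction id>` (or `transaction`) before the `where` filter.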
Hi, is it possible to get the "Evaluate Health Rules" status through the REST API?
Hi, when trying to remove the automatic data model acceleration enforcement from Data Inputs --> Data Model Acceleration Enforcement, I receive this error and can't save: "Encountered the following error while trying to update: The following required arguments are missing: manual_rebuilds." I'm logged in as admin. Any ideas? Thank you
I have the below search:

index=cd source=jenkins pr_number=*
| stats count as Total, earliest(_time) as start, latest(_time) as stop by pr_number name stage.steps{}.stage
| eval start=strftime(start, "%d/%m/%y - %I:%M:%S:%p")
| eval stop=strftime(stop, "%d/%m/%y - %I:%M:%S:%p")
| eval diffTime=stop - start

The evals for start and stop work fine, but the eval for diffTime does not. Since I'm using strftime, how can I calculate the duration between the two dates and times?
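The underlying issue in searches like this is subtracting *formatted strings* rather than the raw epoch values; the usual fix, in SPL as in any language, is to compute the difference first and format only for display. A quick sketch with hypothetical epochs:

```python
import time

start = 1586252000   # hypothetical epoch seconds
stop  = 1586255600

# Subtract the raw epochs, not the strftime-formatted strings.
diff = stop - start
print(diff)          # 3600 (seconds)

# Format only for display, after the arithmetic is done.
print(time.strftime("%d/%m/%y - %I:%M:%S:%p", time.gmtime(start)))
```

In the SPL above, that would mean evaluating diffTime (or keeping copies of the epoch fields) before overwriting start and stop with strftime.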
Hello guys, I've got a dashboard with two hidden panels that depend on a textbox. When the textbox is empty, the panels don't show, as expected. When I add values, the panels appear, also as expected. However, if I take those values out of the multiselect, the panels don't disappear; they just display "waiting for input". This is what I have primarily to hide/unhide the panels:

<progress>
  <condition match="'job.resultCount' > 0">
    <set token="show_summary">true</set>
  </condition>
  <condition>
    <unset token="show_summary"/>
  </condition>
</progress>
<set token="show_summary">true</set>

Is there an event handler to deal with this in a condition match or something? E.g.:

<progress>
  <condition match="'job.Awaitinginput' == 1">
    <set token="show_summary">true</set>
  </condition>
(...)
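One common approach is to handle this on the input itself rather than on the search's <progress> block: a <change> handler can unset the token when the selection changes. A hedged Simple XML sketch; the token names are assumptions, and whether the fallback <condition> fires when a multiselect is fully cleared is worth verifying on your Splunk version:

```xml
<input type="multiselect" token="multi_tok">
  <change>
    <!-- fires when at least one value is selected -->
    <condition value="*">
      <set token="show_summary">true</set>
    </condition>
    <!-- fallback: selection cleared, hide the panels -->
    <condition>
      <unset token="show_summary"/>
    </condition>
  </change>
</input>
```

The <progress> handler can then stay focused on the result-count check alone.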
We have a working, up-and-running Splunk DB Connect installation on an on-prem heavy forwarder, on which we installed the Splunk Cloud app, so all data is now sent to our Splunk Cloud instance. The problem is that the dashboards for monitoring DB Connect health are empty. I'm guessing this is caused by the Splunk Cloud app forwarding all data, including the _internal index, to Splunk Cloud, so the dashboards have no data to display. I see 2 solutions: either install the health dashboards, with data models and all, in Splunk Cloud, or somehow configure the HF to retain the debug data locally. Has anyone solved this problem, and how did you do it?
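For the second option (keeping a local copy on the HF while still forwarding), one hedged sketch is outputs.conf's index-and-forward mode; this indexes *all* data locally, not just _internal, so it should be checked against your license and the Splunk Cloud forwarding app's own settings before use. The group name below is an assumption:

```ini
# outputs.conf on the heavy forwarder (sketch; verify before deploying)
[indexAndForward]
index = true

[tcpout]
defaultGroup = splunkcloud
```

With a local copy of _internal retained, the DB Connect health dashboards on the HF would again have data to read.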
eventtypes.conf:

[test_event]
search = sourcetype=test

tags.conf:

[eventtype=test_event]
email = enabled
alert = enabled

Can I identify, through a search query, whether "test_event" is associated with more than one root data model (email, alert)?
We are currently using the "Splunk App for Microsoft SharePoint" (version 0.2.1) to monitor SharePoint, but on Splunkbase it says "THIS APP IS ABANDONED AND WILL NOT BE UPDATED". Can anybody please point to the correct app/add-on to monitor SharePoint in Splunk (version 8.0.0)? @niketnilay, can you please suggest something?
Hi, in the alert for the Website Monitoring app, there is a check:

tag!="exclude_from_alerts"

which seems to control exclusion of a specific site from alerts. But I have no idea how to set this up. Setting tag=exclude_from_alerts or exclude_from_alerts=true both just result in errors in the log. Thanks, afx
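One thing worth noting is that tags in Splunk are not set as fields on events; they are applied to field=value pairs via tags.conf (or Settings > Tags in the UI). A hedged sketch, assuming the alert search exposes the monitored site through a field such as title (the field name and value here are assumptions; check which field the app's alert search actually carries):

```ini
# tags.conf (sketch; the field name "title" and its value are assumptions)
[title=ExampleSite]
exclude_from_alerts = enabled
```

With that tag in place, the alert's tag!="exclude_from_alerts" filter would drop events for the tagged site.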