All Topics

Hi. Is it possible to do a left outer join after using two |mstats commands like below? I have Process_Name common to both, but I want only the results that are not in the second |mstats command.

| mstats prestats=t min("mx.replica.status") min("mx.process.resources.status") WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY service.name replica.name service.type
| eval threshold="", pid="", cmd="", "host.name"="", "component.name"=""
| mstats append=t prestats=t min("mx.process.threads") WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=10s BY pid cmd service.type host.name service.name replica.name component.name threshold
I have four dashboards: Level 1, Level 2, Level 3, and Level 4. Level 1 is a saved search, and it has a field called months. I want to drill down using the month value to the next dashboard (Level 2), which is also a saved search, using tok_mon=$click.name2$. How do I pass a token from a dashboard with a saved search to another dashboard with a saved search, and likewise from Level 2 to Level 3 and from Level 3 to Level 4?
I have a saved report that is scheduled to run every hour, and I have used that saved search as a reference in a dashboard panel's search query. My question: whenever that dashboard is loaded, will it run the saved search again, or will the results be loaded automatically from the last scheduled run of the saved search?
We need to delete three files from the index. I have used the |delete command to clean the indexed data, and the events are deleted, but the files are still showing under the source field.

source='/var/log/splunk/syslog/******/********' | delete
source='/var/log/splunk/syslog/******/********' | delete
source='/var/log/splunk/syslog/******/********' | delete
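One thing worth checking (a sketch; the index name below is a placeholder) is whether the lingering values come from Splunk's per-bucket metadata rather than from live events. The delete command only marks events as unsearchable; it does not rewrite the bucket metadata that feeds source listings, so those entries can persist until the buckets age out:

```spl
| metadata type=sources index=your_index
| table source totalCount lastTime
```

If a plain search over those sources returns no events while | metadata still lists them, the events themselves are gone and only the metadata entries remain.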
Hello, I have a distributed environment with a cluster of indexers, 3 heavy forwarders, and 3 search heads. How do you manage high availability of add-ons/TAs on HF clusters? For example, I would like to install the AWS Add-on to pull data via REST API:
- If I configure the TA on all 3 HFs, I'll get the same data 3 times (indexed 3 times, and my license usage will explode).
- If I configure the TA on only one HF, I'll have a high-availability problem if that node fails.
Thanks
Hi, I have to create a trending chart for 30 days using the search below, but I am not getting a trend with either timechart or chart.

index=s sourcetype=Fire
| fillnull value=""
| eval trmsc = case(Environment="Production" OR Environment="PSE","Workstations Host Intrusion Detection Prevention Agents Not Reporting")
| rename Reporting_Status as Compliance_Status
| replace Reporting with Compliant "Not Reporting" with Noncompliant "Not Reporting (possibly due to ITAM FQDN field not populated)" with NotReporting "Not Reporting (ITAM FQDN field not populated)" with NotReporting in Compliance_Status
| stats count(eval(Compliance_Status=="Compliant" OR Compliance_Status=="Excluded from reporting, yet is reporting")) as Compliant count(eval(Compliance_Status=="Noncompliant" OR Compliance_Status=="NotReporting" OR Compliance_Status=="Error")) as NonCompliant by trmsc
| append [| search index=c sourcetype=Asset
    | fillnull value=""
    | eval trmsc = case(Cloud_Platform="Azure","Azure Baseline Noncompliance",Cloud_Platform="Aws","AWS Baseline Noncompliance")
    | search Account_Environment="PROD" OR Account_Environment="PRD" OR Account_Environment="PSE"
    | stats sum(CountOf_Compliant_AssetsTested) as Compliant sum(CountOf_Noncompliant_AssetsTested) as NonCompliant by trmsc]
| eval date_wday=strftime(_time,"%A")
| search date_wday="Monday"
| bin _time span=1d
| eventstats count by trmsc
| chart count(trmsc) over _time by Compliance_Status

Please let me know how to get a trending chart from the above search.
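One likely reason no trend appears: the first stats ... by trmsc drops _time, so the later bin _time and chart ... over _time have nothing to bucket. A minimal sketch of the usual pattern, keeping _time through the aggregation (field names follow the question; the eval/replace preparation steps would still need to run before this, so adapt as needed):

```spl
index=s sourcetype=Fire earliest=-30d
| bin _time span=1d
| stats count(eval(Compliance_Status=="Compliant")) as Compliant
        count(eval(Compliance_Status=="Noncompliant")) as NonCompliant
        by _time trmsc
| timechart span=1d sum(Compliant) as Compliant sum(NonCompliant) as NonCompliant
```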
We would like to integrate Auth0 data into Splunk Enterprise. What would be the best way? Are there any apps or add-ons available that can be used?
I am trying to remove some unwanted characters before the backslash, but it is skipping some machines because they have different naming standards. I want to remove the domain name and machine name from the Local Administrator group. My data comes as one string per line, like this:

labmachine000r\administrator
labmachine000d\support
labdomain\admingroup
labdomain\helpdesk

I managed to remove the characters before the backslash using this:

| eval adminlocal=replace(adminlocal, "\w+(\\\\)+","")

and my result looks like this:

administrator
support
admingroup
helpdesk

That works fine for the machines above, but if I have a machine name like "L-02labmachine000r", the replace command gives results like this:

L-administrator
L-support
admingroup
helpdesk

Is there any way to adjust my replace command to cover that machine name?
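A minimal sketch of one adjustment: \w does not match the hyphen in "L-02labmachine000r", so only part of the prefix is consumed. Anchoring at the start of the string and matching any run of non-backslash characters strips the whole prefix regardless of naming standard:

```spl
| eval adminlocal=replace(adminlocal, "^[^\\\\]+\\\\", "")
```

With the sample values, both labmachine000r\administrator and L-02labmachine000r\administrator should reduce to administrator.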
Hi, I have JSON data being written to a log file, and the log file is being forwarded to a single Splunk index, 'ti-l_asl'. The problem is that the JSON data contains a field called 'index' whose value I want to turn into 'sourcetype' so it can be searched on in Splunk. Is there a way I can do this without changing the system that writes the JSON to the log file, i.e. transform the field from 'index' to 'sourcetype' as part of the forwarder processing, or with some kind of pre-processing in Splunk before the data is assigned to index 'ti-l_asl'?
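One common pattern for this is an index-time transform on a heavy forwarder or indexer that copies the JSON field's value into the event's sourcetype. A sketch, assuming the incoming sourcetype stanza name and the JSON key layout (both placeholders here):

```ini
# props.conf -- applied to the incoming sourcetype
[ti_asl_json]
TRANSFORMS-set_sourcetype = set_sourcetype_from_json_index

# transforms.conf
[set_sourcetype_from_json_index]
REGEX = "index"\s*:\s*"([^"]+)"
FORMAT = sourcetype::$1
DEST_KEY = MetaData:Sourcetype
```

Note this only works where index-time parsing happens (HF or indexer), not on a universal forwarder.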
Hi, hope everyone is safe and doing great! I have a project involving column-header merging. Can the format below be achieved in Splunk? If so, can someone please provide some suggestions? Please find the attachment for your reference. Thank you.
Hi, there are more than 1000 UFs installed on Windows and Linux systems. It is a distributed environment with around 100 systems at each location, one indexer deployed per location, and each indexer connected to a search head. Our next step is to verify that all the hosts have been configured properly and are reporting to the indexer. Where a host is missing a source or sourcetype, we need to update the list of hosts that match, and hosts that do not match, the lookup table below. Could someone please suggest the SPL?

source                        sourcetype
WinEventLog:Security          WinEventLog
WinEventLog:Application       WinEventLog
WinEventLog:System            WinEventLog
/var/log/haproxy/haproxy.log  haproxy
/var/log/audit/audit.log      audit
/var/log/maillog              postfix_syslog
/var/log/messages             linux_messages_syslog
/var/log/cron                 cron

Thanks
Manickam
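A sketch of one approach, assuming the table above is saved as a lookup file named expected_sources.csv with columns source and sourcetype (the lookup name is an assumption):

```spl
| tstats count where index=* by host, source, sourcetype
| lookup expected_sources.csv source, sourcetype OUTPUT sourcetype AS expected
| eval match=if(isnull(expected), "not_in_lookup", "in_lookup")
| stats values(eval(if(match=="in_lookup", source, null()))) as matching_sources
        values(eval(if(match=="not_in_lookup", source, null()))) as other_sources
        by host
```

One caveat: this only surfaces what hosts did send; a host that sends nothing at all for an expected source will simply lack that value in matching_sources, so a second pass comparing each host's list against the full lookup would be needed to flag fully silent inputs.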
Hello, I am still trying to figure out the framework of how things work (please note I am not an admin). There is a dashboard with some radio buttons that trigger specific searches, and the results are displayed in the dashboard. I want to trigger these searches ad hoc in the Search webpage. So I need to:
- Get a search alias/link/ID for each of the searches in the dashboard
- Use these aliases to trigger the same search manually

I would prefer to use a REST API command directly in my Power BI; is that possible? If not, I would still prefer to use a REST API command in the Search webpage. Unfortunately, the following does not work for me:

| rest /services/data/ui/views/

But this works:

| rest splunk_server=local servicesNS/-/-/data/ui/views/

Can you help me with the right code please? Thanks!
Hello, I am new to Splunk and I would like to create an app for my dashboards that would be visible on all Search Heads. Can anyone help?
I'm wondering how to properly onboard a file containing:
- A header with a file list
- A separator (a horizontal line consisting of a sequence of dash characters)
- Events, one per line, in a tab-delimited format (at least that's what I know for now; it's still to be confirmed)

In general, the file format is supposed to have a constant set of fields, so a typical delimited extraction should work, but I have two issues:
1) The separator - I suppose the only way to get rid of it would be to match it with a regex and redirect it to the null queue. Not pretty, but doable.
2) The date - should FIELD_NAMES and TIMESTAMP_FIELDS work even without INDEXED_EXTRACTIONS?

I'm also wondering how to tackle daylight saving for timestamps without TZ info. I can set, for example, TZ=CET for a given sourcetype, but if the source applies daylight saving and reports events in CET or CEST depending on the time of year, my events will be an hour off for half the year, right?
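For the separator line, the usual null-queue pattern looks like this (a sketch; the stanza names are placeholders):

```ini
# props.conf
[my_tab_report]
TRANSFORMS-drop_separator = drop_separator_line

# transforms.conf
[drop_separator_line]
REGEX = ^-+$
FORMAT = nullQueue
DEST_KEY = queue
```

Like any index-time transform, this has to live on the first full Splunk instance in the pipeline (HF or indexer), not on a universal forwarder.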
Hi all, I am trying to monitor Cisco UCCE v12.6 with AppDynamics. There is documentation, but it lacks guidance. I have already enabled performance monitoring on Cloud Connect, but the application doesn't show up in the controller. Does anyone have any ideas, or has anyone hit the same problem? Here is the documentation: https://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cust_contact/contact_center/icm_enterprise/icm_enterprise_12_6_1/configuration/guide/ucce_b_serviceability-guide-for-cisco-unified_12_6/ucce_b_serviceability-guide-for-cisco-unified_12_6_chapter_010000.html Regards, Ruli
I have a single sourcetype that I need to split into 3 different categories based on an OS field. I tried using append, but since it uses a lot of memory by searching the same sourcetype multiple times, I need a different approach. My code:

index=A sourcetype=Server
| fillnull value=""
| eval OS=case(like(Operating_System,"%Windows%"),"Windows",like(Operating_System,"%Linux%"),"Linux",like(Operating_System,"%Missing%"),"Others",like(Operating_System,"%Solaris%"),"Solaris",like(Operating_System,"%AIX%"),"AIX",1=1,"Others")
| eval Environment=case(like(Environment,"%Prod%"),"Prod",like(Environment,"%Production%"),"Prod",1=1,Environment)
| search OS="Linux" OR OS="Solaris" AND Environment="PSE" OR Environment="Prod" AND Eligibility="Upper" AND Status="Installed"
| eval group="Unix Server"
| append [| search index=A sourcetype=Server
    | fillnull value=""
    | eval OS=case(like(Operating_System,"%Windows%"),"Windows",like(Operating_System,"%Linux%"),"Linux",like(Operating_System,"%Missing%"),"Others",like(Operating_System,"%Solaris%"),"Solaris",like(Operating_System,"%AIX%"),"AIX",1=1,"Others")
    | eval Environment=case(like(Environment,"%Prod%"),"Prod",like(Environment,"%Production%"),"Prod",1=1,Environment)
    | search OS="Windows" AND Environment="PSE" OR Environment="Prod" AND Eligibility="Upper" AND Hardware_Status="Installed"
    | eval group="Windows "]
| stats count by group

Can this be merged into one single query without using append? That would save searching the same sourcetype twice.
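Since both branches share the same base search and eval logic, the two groups can usually be labeled in one pass with a case() expression (a sketch based on the conditions above; the parentheses below make the AND/OR grouping explicit, which the original implicit-precedence search conditions may or may not have intended, so verify against your data):

```spl
index=A sourcetype=Server
| fillnull value=""
| eval OS=case(like(Operating_System,"%Windows%"),"Windows", like(Operating_System,"%Linux%"),"Linux", like(Operating_System,"%Solaris%"),"Solaris", like(Operating_System,"%AIX%"),"AIX", 1=1,"Others")
| eval Environment=case(like(Environment,"%Prod%") OR like(Environment,"%Production%"),"Prod", 1=1,Environment)
| eval group=case(
    (OS=="Linux" OR OS=="Solaris") AND (Environment=="PSE" OR Environment=="Prod") AND Eligibility=="Upper" AND Status=="Installed", "Unix Server",
    OS=="Windows" AND (Environment=="PSE" OR Environment=="Prod") AND Eligibility=="Upper" AND Hardware_Status=="Installed", "Windows")
| where isnotnull(group)
| stats count by group
```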
Hi @gcusello, could you please help me monitor the HAProxy logs of a server in Splunk? What steps need to be carried out? Also, the user says: "The HAProxy container is set up with rsyslog, using the omfwd module to forward traffic to the relevant IP address that has been set up in the config." Regards, Rahul
How do I make the words colourful? What needs to be added to the source?

<option name="drilldown">none</option>
Hi, I am sure this question must have been asked multiple times, and in fact I've come across multiple posts, but I still don't have an answer. I am a Splunk developer/analyst looking to integrate my Splunk Enterprise with OpsGenie to send alert notifications. The integration guide here https://support.atlassian.com/opsgenie/docs/integrate-opsgenie-with-splunk/ says to install an app from Splunkbase, but when I go to that app https://splunkbase.splunk.com/app/3759/ it says "This app is NOT supported by Splunk. Please read about what that means for you here." What does this mean? As an admin, we can see the app when we browse in Splunk. Does it mean that if we install it, it won't break other things, or that it could? Let me know if anyone has done this integration on their on-prem Splunk Enterprise architecture. Any input is appreciated.
Hello Splunk Community,

I have a merged event which shows whether services are running or down. Here is an example of the event in Splunk:

*******************************************************************************
All services are running
1092827|default|service1 is running
37238191|default|service2 is running
16272373|default|service3 is running
*******************************************************************************

How can I split the merged event so I can extract the service name, status (running/down), and host? For example, from:

16272373|default|service3 is running

I want:

Host | | ServiceName is Status
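A search-time sketch that pulls each pipe-delimited line out of the merged event and splits it into fields (the field names, and the assumption that the first column is the host identifier, are guesses based on the example):

```spl
... your base search ...
| rex max_match=0 "(?<svc_line>\d+\|[^|\r\n]+\|[^\r\n]+)"
| mvexpand svc_line
| rex field=svc_line "^(?<Host>\d+)\|(?<queue>[^|]+)\|(?<ServiceName>.+?)\s+is\s+(?<Status>\w+)$"
| table Host ServiceName Status
```

The first rex with max_match=0 collects every matching line into a multivalue field, mvexpand turns each line into its own result row, and the second rex splits a single line into its parts.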