All Topics


By the following query, I can list the hosts' status and when their status changed:

index=snmptrapd | table _time Agent_Hostname alertStatus_1

With this query the _time values are readable, for example 2020-08-19 21:07:50. However, when I only want to find the latest time a host had a certain status, with the following query:

index=snmptrapd | stats latest(_time) by Agent_Hostname alertStatus_1

I get:

Agent_Hostname   alertStatus_1   latest(_time)
l18-tempmon      critical        1597896470
l20-tempmon      critical        1597901380
l20-tempmon      normal          1597891753

How can I make the latest(_time) field readable as before?
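For reference, the epoch values above decode as follows. This Python sketch mirrors what SPL's strftime does (output shown in UTC; the readable times in the question appear to be in a local timezone):

```python
from datetime import datetime, timezone

def to_readable(epoch):
    """Convert a Unix epoch (as Splunk stores _time) to a readable string."""
    return datetime.fromtimestamp(epoch, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

for host, status, latest in [
    ("l18-tempmon", "critical", 1597896470),
    ("l20-tempmon", "critical", 1597901380),
    ("l20-tempmon", "normal",   1597891753),
]:
    print(host, status, to_readable(latest))
```

In SPL itself, a common approach is to rename the aggregate and format it, e.g. | stats latest(_time) as latest_time by Agent_Hostname alertStatus_1 | eval latest_time=strftime(latest_time, "%Y-%m-%d %H:%M:%S").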
Hi all, can someone advise what is wrong with the following query?

|dbquery wmsewprd "select * from sys_code_type where rec_type='C'"

When I run this, I get an error. But when I use wmsew.sys_code_type instead of sys_code_type, I get the expected output. Can anyone help me with this? Our requirement is for |dbquery wmsewprd "select * from sys_code_type where rec_type='C'" to work. wmsewprd is an external database. Regards, Rahul
I am using the query below to fetch the incident number from the subject line:

rex field=subject max_match=0 "(?<Incident>INC\d+)"

However, for the subject line below I am unable to fetch the incidents:

[SecMail:] INC000027755501|TAS00003760220 wrdna904xusa73|server is unreachable | INC000027790458| INC000027882562
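For what it's worth, the regex itself does match all three incident numbers in that subject line; this Python check sketches the same extraction (re.findall is the analogue of rex with max_match=0). If rex returns nothing, one common culprit is curly quotes (“ ”) pasted around the pattern instead of straight quotes:

```python
import re

subject = ("[SecMail:] INC000027755501|TAS00003760220 wrdna904xusa73|"
           "server is unreachable | INC000027790458| INC000027882562")

# rex ... max_match=0 extracts every match, like re.findall here
incidents = re.findall(r"INC\d+", subject)
print(incidents)
```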
I want to do security log monitoring and use Splunk's alert feature to send email notifications.

The security log looks like this:

_time, SessionID, fieldA, fieldB
yyyymmdd,11111,xxxx,yyyyy
yyyymmdd,11111,bbbb,ccccc
yyyymmdd,22222,bbbb,ccccc
........

As this is a syslog monitoring task, I want to trigger an alert whenever a new SessionID is detected. That means the same SessionID should not be notified twice.

My SPL will be something like:

....| stats count by SessionID

Regarding the alert configuration, which trigger condition should I use? Or is it possible to do this mostly in the base SPL?

Regards,
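One way to think about the "alert once per SessionID" requirement: keep a durable record of IDs already alerted on and only fire for unseen ones. In Splunk this is often done with a lookup that the alert search updates; the logic, sketched in Python (the batch data below is made up):

```python
# Sketch of "alert once per SessionID": keep a persistent set of IDs already
# alerted on (in Splunk, typically a lookup file updated by the alert search),
# and only fire for IDs not yet in it.
seen = set()  # in practice, loaded from and saved back to durable storage

def new_session_ids(events, seen):
    """Return SessionIDs in `events` that have never been seen before."""
    fresh = []
    for event in events:
        sid = event["SessionID"]
        if sid not in seen:
            seen.add(sid)
            fresh.append(sid)
    return fresh

batch1 = [{"SessionID": "11111"}, {"SessionID": "11111"}, {"SessionID": "22222"}]
batch2 = [{"SessionID": "11111"}, {"SessionID": "33333"}]
print(new_session_ids(batch1, seen))  # ['11111', '22222']
print(new_session_ids(batch2, seen))  # ['33333']
```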
Hi, I have Splunk Enterprise installed on an Ubuntu 18.04 server. When I enable the Splunk light forwarder and restart Splunk Enterprise, the web UI for Splunk Enterprise no longer loads.
Hi there! I have a multi-select input that is dynamically populated by a search, and I would like it to automatically set its default value to all possible values returned by the search. Just to clarify, I don't want to use "All / *" as the default. And to clarify even further what I'm looking for, let's say my search returns:

Field
A
B
C

I would like my multi-select to automatically set its default value to A, B, C. Would that be possible? TIA!
I can't see any dashboard showing numbers (data) in the Palo Alto App.

- App version 6.1.1 & TA version 6.1.1
- Splunk version 7.2.9
- Data is being ingested from syslog > UF to Splunk Cloud.
- Data can be searched in Splunk under the sourcetypes pan:traffic, pan:system, pan:threat.
- Data model: pan_firewall is accelerated and built 100%. (There was no data in the other data models, so I disabled acceleration on them.)

One of the search queries from the Network Security dashboard:

| tstats summariesonly=t count FROM datamodel="pan_firewall" WHERE nodename="log.correlation" GROUPBY log.severity log.threat_category log.threat_name | rename log.* AS * | stats sum(count) AS count by threat_name threat_category severity

I'm wondering whether the field nodename (not found in the data model), which is used in many other panels' search queries, might be causing the issue. If so, how do I fix it? Please advise. Thanks.
Hi, I have a new install with a single Splunk server for evaluation. I set up the universal forwarder and the Splunk service on CentOS, and updated PingFederate to create the required Splunk audit file. I then configured the receiver and the sender to use /opt/pf/pingfederate/log/splunk_audit.log.

Entries started to flow from the forwarder to the Splunk indexer, but all the PingFederate App panes show "waiting for input". From search I can see the data events flowing, but they all say Splunk_Audit_Too_Small. Any tips on how to fix this? Thanks!
Hi, I'm using the Add-on for Amazon Web Services version 5.0.0. I have ingested ALB logs as described in https://docs.splunk.com/Documentation/AddOns/released/AWS/IncrementalS3. I can see the logs are being indexed; however, those events are still not parsing correctly, and I can see only the raw logs. Has anyone successfully parsed AWS ALB logs? I'm using an index cluster. I followed the thread below, though it is a bit old, with no luck: https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-Add-on-for-Amazon-Web-Services-Why-are-ALB-Access-logs/m-p/231895#M25928
Hi team, I want to put a few of my client servers into Splunk monitoring from the console. Please assist.
I built a simple TA using the Add-On Builder (v3.0.1) on Splunk 8.0.4. One thing I noticed is that the TA only pulls the first page of results; at the bottom of the pulled data are URLs for the current, next, and last pages. How do I configure this so that my TA iterates through all the pages? I thought it would involve the checkpoint settings, but I'm not sure how to set that up. I am not a developer, so this endeavor has been a tough learning experience, and the Splunk documentation lacks examples for these things.

The API I am pulling from says I can use page[number] and page[size], but I don't know how to have the TA pull from page 1 to the last page (6 in this case).

Existing setup

REST URL:
https://api.website.com/${rest_endpoint}?page[number]=${page_number}

Event extraction settings
JSON path: $.data

Checkpoint settings
Checkpoint parameter name: page_number
Checkpoint field path: $.links.last
Checkpoint initial value: 1

My data:

{
  "data": [
    {
      "id": "52",
      "type": "report",
      "attributes": {
        "name": "Bonorum",
        "description": "Lorem ipsum dolor sit amet.",
        "created_at": "2019-01-05T01:51:19.000Z"
      }
    },
    {
      "id": "7",
      "type": "report",
      "attributes": {
        "name": "Perspiciatis",
        "description": "Quia dolor sit amet.",
        "created_at": "2017-01-05T01:51:19.000Z"
      }
    }
  ],
  "links": {
    "self": "https://api.website.com/reports?page%5Bnumber%5D=1",
    "next": "https://api.website.com/reports?page%5Bnumber%5D=2",
    "last": "https://api.website.com/reports?page%5Bnumber%5D=6"
  }
}
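The iteration the TA needs can be sketched as a loop that follows links.next until it is absent. Here fetch_page is a hypothetical stand-in for the HTTP GET the modular input would issue (three pages instead of six, to keep the sketch short):

```python
# Hypothetical stand-in for the REST call the TA makes; a real modular input
# would issue an HTTP GET here. Pages carry the JSON:API-style "links" block
# shown in the question.
def fetch_page(number, last=3):
    data = [{"id": str(number), "type": "report"}]
    links = {"self": f"?page[number]={number}"}
    if number < last:
        links["next"] = f"?page[number]={number + 1}"
    return {"data": data, "links": links}

def collect_all_pages():
    """Iterate from page 1, following links.next until it disappears."""
    events, page = [], 1
    while True:
        body = fetch_page(page)
        events.extend(body["data"])      # what the TA extracts via $.data
        if "next" not in body["links"]:  # no next link means last page reached
            break
        page += 1                        # a checkpoint would be saved here
    return events

print([e["id"] for e in collect_all_pages()])  # ['1', '2', '3']
```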
I am trying to get the following results for date, email, and answer, with the other data in separate rows.

Results I am getting: (screenshot)
Results I need to see: (screenshot)

Search query:

index=someindex | eval status=case(like(_raw, "%NO%"), "NO", like(_raw, "%YES%"), "YES") | lookup fall2020OnCampusStudents email OUTPUT class, name, ID, className, classNumber, college | search class!="" | table Date, name, email, ID, status, class, className, classNumber, college | sort college, email, class | rename email AS "Email", status AS "Answer", class AS "Classes", className as "Class Name", classNumber as "Class Number", college as "College"

I have tried using mvexpand, but it will only take the first line of each field. I am still trying to understand other techniques, but I'm still learning.
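mvexpand only expands one multivalue field at a time, which is why only the first line of each field survives. The usual fix is to pair the multivalue fields up first (in SPL, mvzip before mvexpand). Here is the pairing logic sketched in Python, with a made-up record using the question's field names:

```python
# mvexpand expands only one multivalue field at a time; to expand several
# fields in lockstep (class, className, classNumber here), pair them up first.
# Field names come from the question's lookup output; the record is invented.
def expand_rows(record, mv_fields):
    """Yield one row per aligned entry of the multivalue fields."""
    fixed = {k: v for k, v in record.items() if k not in mv_fields}
    for values in zip(*(record[f] for f in mv_fields)):
        row = dict(fixed)
        row.update(dict(zip(mv_fields, values)))
        yield row

record = {
    "email": "a@example.edu",
    "class": ["C1", "C2"],
    "className": ["Math", "Bio"],
    "classNumber": ["101", "202"],
}
rows = list(expand_rows(record, ["class", "className", "classNumber"]))
print(rows[0]["className"], rows[1]["className"])  # Math Bio
```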
Is there any point in getting tools.jar onto the classpath for Corretto 8 installations with AppD's agent? In our AWS Beanstalk automation for instances using OpenJDK Java 1.8, we had a bit of machinery extending the classpath to include the JDK's tools.jar (with the understanding that it was required for some of AppD's more advanced features). But https://docs.appdynamics.com/display/PRO45/App+Server+Agents+Supported+Environments#AppServerAgentsSupportedEnvironments-jvm-supportJVMSupport says that at least some advanced features are not available with Corretto 8/11. So does AppD not utilize Corretto 8 JDK's tools.jar? If it does, how, and for what? Or is it of no use there now, so we can stop jumping through classpath hoops? Thanks for any insights.
Hi there, I'm facing an interesting problem with fairly complex logs consisting of one or more XML namespaces, some of which have JSON objects embedded in them. I have this working fairly reliably using a transform-based field extraction plus SPL (the logs are all over the place, but that's a separate problem!), and I was wondering if anyone can suggest how to do this automatically using props and transforms, without having to use spath in a base search to do all the finagling.

Transform extraction:

REGEX=<([:\w]+)>([^<]+)
FORMAT=$1::$2

This gets out all the XML tags, which generally have the format tns:SomethingOrOther, and their associated values. The JSON bit is usually in tns:Payload.

SPL:

index=TheIndex sourcetype=theSourcetype | spath input=tns:Payload

This is reliable enough, but I'd like to recommend improvements to the site. The more general question here is: where you have complex logs of this type, how can you configure props/transforms to do the right operations in the right sequence, i.e. XML first, then JSON extraction? If you tried it the other way round, it wouldn't work, would it? And if you had the opposite problem, XML embedded in JSON, how would you do that? Is it even possible to control the order in which props attempts structured extractions? Props certainly supports extracting both types of values, but how do you know which one it tries first if you configure both? Props seems to assume you're only looking at one type of thing per sourcetype, which is not the case here, unless I've missed something. Of course, if props simply executes them in the order encountered in the file, there isn't much of an issue. However, I have some constraints which prevent me from just experimenting with it:

--no access to sourcetype configuration, nor am I likely to get it
--cannot download any of the data, which is commercial in confidence; the site takes this very, very seriously
--no dev or test environment (!) Strange but true for an enterprise of this size and prominence.
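For comparison, the two-stage order the question asks about (XML first, then JSON from the payload element) can be sketched in Python. The sample record is invented, but it mirrors the tns:Payload layout described above:

```python
import json
import xml.etree.ElementTree as ET

# Two-stage extraction: pull out XML elements first, then parse the JSON
# payload held in one of them. The tns:Payload layout mirrors the question;
# the record itself is made up.
raw = ('<Event xmlns:tns="urn:example">'
       '<tns:Source>gateway</tns:Source>'
       '<tns:Payload>{"user": "jdoe", "action": "login"}</tns:Payload>'
       '</Event>')

root = ET.fromstring(raw)
fields = {}
for elem in root:
    # strip the {namespace} prefix ElementTree adds, keeping the local name
    fields[elem.tag.split("}")[-1]] = elem.text

# second stage: JSON extraction from the payload element
payload = json.loads(fields["Payload"])
print(fields["Source"], payload["user"], payload["action"])
```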
I have this installed, and when I add it to a dashboard it "forces" me to add a "search" to the panel. The docs say: 3 - Enter any generating command as a search text (e.g. |makeresults). What does that mean? The manual "export to PDF" works, so I don't understand what this "generating command" should be, especially since there isn't one specified in the docs.
I have a dashboard that is set up to run some conditions based on a change from an input and a date picker. However, this doesn't work when someone links into the dashboard and it pre-fills the inputs, since Splunk treats that as the "initial value" and not a change. I don't want to set anything as the initial value, because that would trigger the searches to fire when there's nothing there, i.e. when someone doesn't link in; I'd like the XML to keep rejecting empty inputs so as not to waste resources. Is there a way I can set the tokens, perhaps in the <init> block, from the URL parameters? I tried $form.field$ and that didn't seem to do anything. Thanks!
I have two individual stats searches that return a single value each. How can I combine the two to get a ratio?

The index is basically a table of transaction IDs; there can be multiple entries for an ID. For example:

Transaction ID   Status
txn1             200
txn1             500
txn2             200
txn3             200

Search #1 tells me the number of transactions that ended in an error, by looking at the last status of a transaction ID:

baseSearch | stats latest(status) as lastTxnStatus by txn_id | where lastTxnStatus >= 500 | stats dc(txn_id)

Search #2 tells me the total number of transactions:

baseSearch | stats dc(txn_id)

I want to get a mathematical result of 100 * Search #1 / Search #2. How can I do that? The trouble I'm having is with the "where" command in Search #1; that complicates everything. Using the data in the table above, the result would be 33.3333% (i.e. 100 * 1/3).
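The ratio itself is straightforward once each transaction's last status is known; this Python sketch walks the sample table in one pass and computes 100 * errored / total. In SPL, the usual trick is to replace the where clause with a conditional count (e.g. an eval flag summed in the final stats) so both numbers come out of a single search:

```python
# One-pass version of the ratio: find each transaction's last status, then
# compute 100 * (# whose last status >= 500) / (# distinct transactions).
# Sample rows mirror the table in the question.
events = [
    ("txn1", 200),
    ("txn1", 500),
    ("txn2", 200),
    ("txn3", 200),
]

last_status = {}
for txn_id, status in events:  # later rows overwrite earlier ones
    last_status[txn_id] = status

errored = sum(1 for s in last_status.values() if s >= 500)
ratio = 100 * errored / len(last_status)
print(round(ratio, 4))  # 33.3333
```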
Hello, from Splunk data analysis I saw some duplicate events being logged a few days back, and I can't search the same data now. I need to write a Splunk query that will look for the same events logged multiple times and report the stats. What query can be used for this duplicate-event analysis, giving a summary of such events and their counts?
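The shape of the analysis can be sketched in Python: group identical events and keep those seen more than once, with their counts. The sample events below are invented; in SPL the analogous query is typically something like ... | stats count by _raw | where count > 1.

```python
from collections import Counter

# Group identical raw events and keep only those seen more than once,
# along with their counts. Sample events are made up.
events = [
    "ERROR disk full on /dev/sda1",
    "INFO user login ok",
    "ERROR disk full on /dev/sda1",
    "ERROR disk full on /dev/sda1",
]

counts = Counter(events)
duplicates = {event: n for event, n in counts.items() if n > 1}
print(duplicates)  # {'ERROR disk full on /dev/sda1': 3}
```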
We're encountering an odd issue with how our Splunk dashboard query passes values to the JavaScript code invoked within the dashboard's source code.

To give an example, our dashboard SPL query generates the correct values. Here is a sample output:

Transaction Name   June 2020 - Count   July 2020 - Count   August 2020 - Count
Employee Batch     2400                3200                2900
Supervisor Batch   2100                2800                2500

However, the JavaScript file correctly picks up all the values except the last column's, which it receives from the Splunk query as "0":

Transaction Name   June 2020 - Count   July 2020 - Count   August 2020 - Count
Employee Batch     2400                3200                0
Supervisor Batch   2100                2800                0

This results in the dashboard displaying the values captured by the JavaScript code, so the last column shows incorrect data.

There does not appear to be any hard-coding of values in the SPL query, so we're rather confused as to how or why the JavaScript code would misinterpret the values of just one column while correctly interpreting all the others. Would you have any ideas on how to proceed? We tried researching this issue online but were unable to identify any helpful information.
I have a dashboard that generates a table, and I would like to add the ability to jump into search from the table on the dashboard. We have hundreds of TB of data a day in the index, so I'd like to limit the timeframe to +/- 30m around the timestamp I have. So if the timestamp of the event is 8:21pm, I want the search to be something like:

index=index field=field earliest=(timestamp-30m) latest=(timestamp+30m)

How could I achieve this via the dashboard XML? Thanks!
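The window arithmetic is just +/- 1800 seconds around the clicked event's epoch time; in Simple XML, a drilldown <eval> token can compute it from $row._time$ before setting earliest and latest on the target search. The arithmetic, sketched in Python:

```python
# Sketch of the +/- 30 minute window computed from a clicked row's epoch
# timestamp; this is the arithmetic a drilldown <eval> token would perform
# before passing earliest/latest to the search.
def window(click_epoch, minutes=30):
    """Return (earliest, latest) epoch seconds around the clicked time."""
    delta = minutes * 60
    return click_epoch - delta, click_epoch + delta

earliest, latest = window(1597896060)  # e.g. an 8:21pm event's epoch
print(latest - earliest)  # 3600
```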