All Topics



Hello, I have the following search:

index=_internal source=*license_usage.log* type="Usage" idx=example | eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h) | eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s) | eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx) | bin _time span=1h | stats sum(b) as b by _time, host, pool, s, st, h, idx | search host=server1 | timechart span=1h sum(b) AS volumeB by idx fixedrange=false | eval date_wday=lower(strftime(_time,"%A")) | where NOT (date_wday="saturday" OR date_wday="sunday") | fields - date_wday | eval tacp=round('example'/1024/1024/1024, 3) | stats avg(example) as avg latest(example) as Event | eval limit=(avg+((avg/100)*20))

All I want to do is this: the search should run every hour and then look back 30 days for that specific hour. For example, if the search starts at 12:00, every event at 12:00 for the last 30 days is summarized and an average is calculated. Then 20% is added to the average; this is my limit. After the limit is calculated, the alert checks whether the limit is lower than the actual size for the current day, and if not, an e-mail is sent. Thank you for your comments. Please help.
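The threshold logic described above (30-day average for the same hour, plus 20% headroom) can be sanity-checked outside Splunk. A minimal Python sketch with made-up byte totals:

```python
# A sketch of the alert logic: average the volume for the same hour over
# the last 30 days, add 20% headroom, and alert when today's volume
# exceeds that limit. The byte totals below are hypothetical sample data.

def compute_limit(history_bytes):
    """30-day average plus 20%, mirroring the eval in the search."""
    avg = sum(history_bytes) / len(history_bytes)
    return avg + (avg / 100) * 20

def should_alert(history_bytes, todays_bytes):
    """True when today's volume for this hour exceeds the limit."""
    return todays_bytes > compute_limit(history_bytes)

# 30 hypothetical daily volumes for the 12:00 hour
history = [100, 110, 90, 100] * 7 + [100, 100]
print(compute_limit(history))      # 120.0 (the average is 100)
print(should_alert(history, 130))  # True
```

This only illustrates the arithmetic; the scheduling (run hourly, restrict to the matching hour of day) would still be done via the alert's cron schedule and a date_hour filter in SPL.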
Hi Splunk Experts, Suppose I only have Splunk Cloud. Is it not possible to set an alert based on a search that correlates events from multiple systems, such as correlating an event across endpoint and network activity? Is this where Enterprise Security is needed? Or can one technically do it without Enterprise Security, just with more effort? Would Splunk Enterprise have more out-of-the-box correlations than Splunk Cloud alone?
Goal: I am after "number of actions per unique customer" metrics from API metric logs. Below is my query. It filters results to a specific request.path, then gets stats by Customer_Id and _time, then takes the total count as uniqueCustomers and sums those counts to get totalActions; the result is totalActions divided by uniqueCustomers.

index="some_index" sourcetype="api-index" message="Metrics Information" | rex field=request.path "/v1/actions-api/(?<Customer_Id>\w+)" | stats count by Customer_Id, _time | eventstats count as uniqueCustomers | eventstats sum(count) as totalActions | eval result = totalActions/uniqueCustomers

The problem is that I don't know how to compute this result per day. I tried the following, but it isn't working because it would need two BY clauses. Please suggest some solutions. Thank you.

| bin _time span=1d
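The shape of the per-day calculation can be checked outside SPL: bucket each event by day first, then count actions and distinct customers within each day. A small Python sketch with hypothetical events:

```python
from collections import defaultdict

# Hypothetical (day, customer_id) pairs, as extracted from request.path
events = [
    ("2020-11-01", "cust1"), ("2020-11-01", "cust1"), ("2020-11-01", "cust2"),
    ("2020-11-02", "cust1"),
]

per_day = defaultdict(list)
for day, customer in events:
    per_day[day].append(customer)

# For each day: total actions divided by number of unique customers
for day, customers in sorted(per_day.items()):
    result = len(customers) / len(set(customers))
    print(day, result)  # 2020-11-01 -> 1.5, 2020-11-02 -> 1.0
```

In SPL the equivalent move is to bin _time to a day before the stats, so the day becomes part of every subsequent grouping rather than needing a second BY clause.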
Hi all, in my original search I'm getting data with the following command:

| stats range(_time) as timetaken by CorrelationID | stats count as Total, avg(timetaken) as AvgResponseTime, perc95(timetaken) as P95ResponseTime

but now I want this data on an hourly basis, so I tried the following:

| bin _time span=1d | eval Time=strftime(_time , "%d/%m/%Y %H:%M") | stats range(_time) as timetaken by CorrelationID | stats count as Total, avg(timetaken) as AvgResponseTime, perc95(timetaken) as P95ResponseTime by Time

but this gives me a 0 value. I'm looking for the right way to get the data on an hourly basis.
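The intended per-hour aggregation (count, mean, and 95th percentile of the per-correlation durations within each hour) can be prototyped in Python to check expectations. All names and sample values below are made up, and the percentile here is a simple nearest-rank approximation of perc95():

```python
import statistics
from collections import defaultdict

# Hypothetical (hour, seconds_taken) pairs, one value per CorrelationID
durations = [
    ("10:00", 1.0), ("10:00", 2.0), ("10:00", 3.0),
    ("11:00", 4.0), ("11:00", 6.0),
]

by_hour = defaultdict(list)
for hour, taken in durations:
    by_hour[hour].append(taken)

for hour, values in sorted(by_hour.items()):
    total = len(values)
    avg = statistics.mean(values)
    # Nearest-rank approximation of a 95th percentile
    p95 = sorted(values)[min(total - 1, int(0.95 * total))]
    print(hour, total, avg, p95)
```

Note that grouping by hour corresponds to binning to an hourly span, whereas the attempted search bins _time with span=1d before formatting the Time field.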
Hi, I would like to use artifact_offset in https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Loadjob to troubleshoot why alert-when-rises-by-issue (http://answers.splunk.com/answers/6843/alert-when-rises-by-issue) does not seem to work for a saved-search alert. The maximum artifact_offset seems to be only 3 for that alert, and I have not been able to find a way to increase it to a higher value. It would be nice to have an Advanced Edit option for updating the artifact_offset of a saved search.
I have a requirement to show a timechart of the percentage of 5xx errors over total requests. Currently I have index=cgn http_status=5* | timechart count, which gives me a timechart of error counts, but this does not give me the real picture of how the backend node is doing. So I need to change the chart to show the percentage of 5xx errors over total requests, so I can tell how big the issue is. Any help?
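The calculation itself is just the 5xx count divided by the total request count within each time bucket. A minimal Python sketch with hypothetical per-hour counts (in SPL this would typically mean counting both the total and the 5xx subset in the same timechart, then dividing with an eval):

```python
# Hypothetical per-hour request totals and 5xx counts
buckets = [
    ("12:00", 200, 10),  # (hour, total requests, 5xx responses)
    ("13:00", 400, 4),
]

def error_rate(total, errors):
    """Percentage of 5xx errors over all requests, rounded to 2 places."""
    return round(100.0 * errors / total, 2)

for hour, total, errors in buckets:
    print(hour, error_rate(total, errors))  # 12:00 -> 5.0, 13:00 -> 1.0
```

The key point the sketch shows: the base search must not filter to http_status=5* up front, or the total-request denominator is lost.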
I have the following string:

"userEmail":"someString/ab-cde-fgh-2020.domain.com@DOMAIN.COM" ABC DEF, "userAddress":"otherString/ig-klm-nop-2020.domain.com@DOMAIN.COM" HIG KLM, "userEmail":"someOtherString/ab-cde-fgh-2020.domain.com@DOMAIN.COM" ABC DEF,

from which I want to extract the "ab-cde-fgh-2020.domain.com" part, but only from the "userEmail" tag. The regex works on regex101; however, in Splunk Search, trying to use the expression

| rex "(?<user>(?<="\"userEmail"\"\:\".*)(?<=\/)(.*?)(?=\@))"

gives me the error that the "lookbehind assertion is not fixed length", while the following

| rex "(?<user>(?<=\"userEmail"\"\:\").*(?<=\/)(.*?)(?=\@))"

returns

someString/ab-cde-fgh-2020.domain.com someOtherString/ab-cde-fgh-2020.domain.com

as one would expect. However, the strings in the position of "someString" or "someOtherString" could be of any length in my data. What could be a workaround for this issue?
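Since PCRE-style lookbehinds must be fixed-length, one workaround is to drop the lookbehind entirely: match the literal "userEmail" prefix as part of the pattern and keep only a plain capture group for the wanted part. A Python sketch against the sample string above (the same pattern should carry over to rex, since both engines accept this syntax):

```python
import re

text = (
    '"userEmail":"someString/ab-cde-fgh-2020.domain.com@DOMAIN.COM" ABC DEF, '
    '"userAddress":"otherString/ig-klm-nop-2020.domain.com@DOMAIN.COM" HIG KLM, '
    '"userEmail":"someOtherString/ab-cde-fgh-2020.domain.com@DOMAIN.COM" ABC DEF,'
)

# Anchor on the userEmail key, let a greedy non-quote run absorb the
# variable-length prefix up to the '/', then capture up to the '@'.
pattern = r'"userEmail":"[^"]*/([^/@"]+)@'
matches = re.findall(pattern, text)
print(matches)  # both userEmail values; the userAddress entry is skipped
```

Because the variable-length part sits in the match (not in a lookbehind), it can be any length without triggering the fixed-length restriction.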
Got an interesting dilemma. I work for a large worldwide company. Each site around the world has its own unique set of indexers/clustered search peers for its own domain, e.g. https://lesplunk.lehi.micron.com:8000/ for the Lehi, UT, USA site. However, we have some ingestion which is queried by a centralized/worldwide indexer/search head cluster. I would like to create a dashboard that can run a set of searches local to my domain (the Lehi Splunk environment) but also display a single report from a search that must be performed on the worldwide domain (the WW Splunk environment). I've tried to use an iframe in a panel to get the correct report to show, but I may be messing this one up. The biggest issue I have appears to be regarding access to the web sites: "[web address] refused to connect." I even tried to iframe in www.google.com to see if I could get it to display. I used this code snippet from another community page to test out the iframe feature:     <panel>       <html>         <html>         <h2>Example iframe of a dashboard</h2>         <p>Uses display controls via the http get param</p>         <code>           <![CDATA[<iframe src="/app/simple_xml_examples/simple_display_controls_example?hideChrome=true&hideEdit=true">]]>         </code>         <iframe src="/app/simple_xml_examples/simple_display_controls_example?hideChrome=true&amp;hideEdit=true" width="100%" height="400" border="0" frameborder="0"/>       </html>       </html>     </panel> I know that each instance of Splunk at each of the sites has its own set of login credentials, so I assume the issue is related to the Lehi Splunk credentials being passed to run the report at the WW Splunk site.
I found the article "Search across multiple indexer clusters" on docs.splunk.com: https://docs.splunk.com/Documentation/Splunk/8.1.0/Indexer/Configuremulti-clustersearch. It says that we could set up our search heads to hit the WW Splunk ingested data, but I think it would apply to ALL searches from the Lehi Splunk environment and to ALL data that has been ingested in the WW Splunk environment (which I do not want). If this is not how it works, please correct my ignorance; or if there is a way to set up a single dashboard to work in this fashion, please share. Using the help from the community question "Feature Request: How to embed a dashboard (not a report) that is updated every hour in a webpage?" (https://community.splunk.com/t5/Dashboards-Visualizations/Feature-Request-How-to-embed-a-dashboard-not-a-report-that-is/td-p/198028), I tried this: <iframe src="https://wwsplunk-vip/en-US/app/search/orion_client_percent_availability?username=viewonlyuser&amp;password=viewonly&amp;return_to=/app/MES/orion_clone" frameborder="no" scrolling="no" width="1200" height="500" title="ORION Client Percent Availability" ></iframe> Still nothing, but I did not get the "[web address] refused to connect" error, so I think this might be closer. I've read through quite a few community pages looking for answers, but if this sounds familiar or you have a page I may have missed, simply direct me to the link. Any suggestions for what to try next? Has anyone had a similar issue? Is there any simpler method to insert a report/dashboard from one Splunk server into a dashboard panel of another Splunk server (not in the same cluster)? If I can provide any more details to help make this clearer, I will happily reply with additional details.
Hi Everyone, I'm newer-ish to Splunk. I'm doing a search similar to this: index=mfa sourcetype=lexus Subcategory="Delivery Method". With the search results, I want to do stats count by action, but it brings back results similar to the one below, with each action having a different phone number. How do I get stats only on the wording "User selected text delivery", rather than one stat for every phone number? There are 100 actions with different phone numbers; I just want a count by "User selected text delivery".

"User selected text delivery to ***-***-****"

I hope this makes sense. I'll gladly provide more info if needed; I'm just pretty new to this and looking for some help. Kevin
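A common approach is to normalize the action string before counting, replacing the variable phone number with a fixed placeholder so all the variants collapse into one value. A Python sketch of the idea (the raw action values are made up; in SPL this would typically be an eval with replace() before the stats):

```python
import re

# Hypothetical raw action values with varying phone numbers
actions = [
    "User selected text delivery to 111-222-3333",
    "User selected text delivery to 444-555-6666",
    "User selected call delivery to 111-222-3333",
]

counts = {}
for action in actions:
    # Collapse the trailing phone number into a fixed placeholder
    normalized = re.sub(r"to \S+$", "to <number>", action)
    counts[normalized] = counts.get(normalized, 0) + 1

print(counts)
```

After normalization, the stats count groups on the wording alone, which is the behavior the post asks for.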
Hi, I'm having to deploy Splunk for the very first time, having never worked with it before, and I'm struggling to understand some of the basic architecture. I have been tasked to set up Splunk ES and UBA for what will be a maximum of 20 users, a handful of forwarders, and currently 6 data sources (expanding to 10). The first question I wanted to ask relates to UBA: if I set up a single master node to begin with, can I expand that later down the line to 3 nodes (to cater for the 10 data sources), and is there anything I need to be aware of before doing so, to take into consideration for when the remaining data sources come online? Now a daft question: in the topology of a small enterprise deployment, does an indexer equate to a single server with an instance of Splunk Enterprise? For our setup, how do I determine the number of indexers required? Could someone explain the number of physical servers (VMs) required in this type of scenario? It would be a great help. I appreciate any help in my understanding of this, as I am losing sleep over it.
Dear all, for using the add-on, my customer requires me to use client certificates for authentication; using the proposed client secret is not the preferred implementation. Getting the certificate pair of public and private parts is not a problem, but can someone tell me how to set this up for the add-on? Where do I have to change the config and the scripts to get it working? Any help appreciated. Thanks, Carsten
All, is there any guidance, like we get with the security inputs (osnix, osnixscript, osnixbash), for the Splunk_TA_nix metric inputs? I was going to go with osnixmetrics, but if there is something Professional Services prefers, I wanted to make sure that is what I used. Thanks,
How can I change the AWS app's scheduled searches to use AWS data in the index aws_data instead of the default (main)? I have a Splunk Cloud environment. I am configuring the AWS Add-on for Splunk inputs to place data in an index called aws_data rather than have it go to the default (main) index. I see data in my aws_data index, but the Splunk app doesn't seem to see it with its scheduled searches. Do I need to switch the data index back to default (main) for the app to properly process the data in its dashboards, or can I change which index the scheduled searches use with the CIM tool or other methods? Thank you, Rich
I inherited a Splunk deployment and I am trying to reconcile the config I see with what is referenced in the docs. The deployer's app.conf lists "deployer_push_mode = default", and I am wondering if that means merge_to_default? Would it be best practice to change this from default to merge_to_default?
I'm running Splunk Universal Forwarder v8.0.3.0 on Windows 2012 R2. What is the process to replace the self-signed certificate Splunk creates and uses by default? We want to use a certificate provided by our internal Enterprise Certificate Authority (CA); we do not want to use any self-signed certificates, since we scan our environment and any self-signed certificates are listed as vulnerabilities on our network. There are several blogs mentioning creating web.config or updating the server.conf file, and placing certificates in a non-existent ${SplunkHome}\etc\auth\mycerts directory; I have to assume mycerts has to be created. Here is my request: given that I can create a certificate pair (private, public, with keys) in PEM format from our CA, I need a step-by-step, in-order process on how to make it all work: where to place the certificate(s), what precise file to update, and what information is needed in that file (relative to where we place the certificates). We do not control the Deployment Server or Heavy Forwarders; I only work with the Universal Forwarder. Our data is sent to those other servers. If there is any dependency or change needed on those servers when we change our certificate, what is it? We are currently connected to those servers. Again, please no tidbits of information, but an ordered list of instructions to make this work. If there is a link with this detail, post it, because I have not found anything bulletproof. I saw one item, then a person mentioning a problem trying to implement that item; when I read it, it made many assumptions, like creating the mycerts sub-directory. There were others. Not a good process. Thanks,
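As a rough sketch of the shape such a configuration usually takes on the forwarder side (all paths, stanza names, and hostnames below are assumptions to adapt, not a verified step-by-step for this environment): the forwarder-to-receiver TLS settings live in outputs.conf, the management-port certificate in server.conf, and the PEM files go in a directory you create yourself, such as $SPLUNK_HOME\etc\auth\mycerts.

```ini
# outputs.conf -- forwarder-to-receiver TLS (sketch; adjust group name,
# receiver address, and paths to your environment)
[tcpout:primary]
server = receiver.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/client.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/ca.pem
sslPassword = <private key password>

# server.conf -- management-port certificate (sketch)
[sslConfig]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/server.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/ca.pem
sslPassword = <private key password>
```

Whether the receiving indexers or heavy forwarders need matching changes depends on whether they are configured to verify client certificates against the same CA chain; that would be a question for whoever administers them.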
How do I create a search that extracts a string in the format "severity" = "40." from raw logs?
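An extraction for that shape can be prototyped with a plain regex before turning it into a rex command or a props.conf extraction. A Python sketch (the sample log line is hypothetical):

```python
import re

# Hypothetical raw log line containing the severity key/value pair
raw = 'timestamp=2020-11-01 "severity" = "40." component=auth'

# Match "severity" = "<digits>." and capture just the number
match = re.search(r'"severity"\s*=\s*"(\d+)\.?"', raw)
severity = match.group(1) if match else None
print(severity)  # 40
```

The same pattern, with a named group such as (?<severity>\d+), should be usable directly in a rex command.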
How can I get data from McAfee ePO directly into Splunk? I see that there is an add-on for McAfee, but it requires syslog configuration over TLS, which I'm having issues configuring.
Hi all, I am new to Splunk and I am unable to complete the installation on CentOS 8. Error: Unable to start the web interface. (Reference image attached.) Please help. Thanks in advance. Best regards, Sujith Thyagaraja
Hey All, Having issues getting data in. With only the inputs monitor stanza, data comes through, but when I add the props to do index-time field extractions, data stops coming in altogether. No errors are seen in _internal; anyone got some ideas?

Inputs.conf
[monitor://C:\file\data\]
whitelist = file1.csv$
index = file1
sourcetype = file1
disabled = false

Props.conf
[file1]
INDEXED_EXTRACTIONS = CSV
FIELD_DELIMITER = ,
FIELD_QUOTE = "
HEADER_FIELD_DELIMITER = ,
Hi everyone!   The Splunk Phantom team is looking for your feedback in this quick 10 minute survey.   20 lucky winners will be randomly selected to win a $25 gift code to the Splunk store to purchase Splunk swag! Each survey completion is an entry, limit 1 per person, and only US participants can be entered in the sweepstakes but everyone can participate in the survey.   Please review all the official rules here. Good luck and thank you for your time!