All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I installed Splunk on a CentOS 8 machine. firewall-cmd allows ports 8000, 8089, 80, 443, 9997, etc. I can log in to the Splunk web interface at http://127.0.0.1:8000 and http://10.10.10.211:8000 (the IP of the installation machine). Everything works fine on the installation machine itself, and I am collecting logs from all the network machines. But when I try to log in remotely from another Windows machine via http://10.10.10.211, it doesn't work and the connection times out. I also tried to telnet to port 8000 remotely, but the connection timed out. I can ping the Splunk machine and everything seems fine. I stopped the firewalld service, but nothing changed. I added allowRemoteLogin=always to server.conf and restarted Splunk, and nothing changed. Why can't I log in to the Splunk web interface from another machine?
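When remote browsers time out but local access works, two things worth checking are which interface Splunk Web is bound to and which firewalld zone actually covers the NIC. A minimal sketch of the relevant web.conf settings (values shown are the defaults, not taken from the poster's system):

```ini
# /opt/splunk/etc/system/local/web.conf
[settings]
httpport = 8000
# Defaults to 0.0.0.0 (listen on all interfaces). If this were set to
# 127.0.0.1, only logins from the Splunk host itself would succeed.
server.socket_host = 0.0.0.0
```

If the bind address is correct, `ss -tlnp | grep 8000` on the Splunk host should show a listener on 0.0.0.0:8000, and `firewall-cmd --list-ports` should include 8000/tcp for the zone the interface is actually assigned to. A timeout that persists with firewalld stopped often points to an intermediate firewall or to SELinux rather than to Splunk itself.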
Hi, how can I set up an alert so that, when it fires, it sends an email to me? When it fires, the email should show me a previously created dashboard. Is that possible? Otherwise, the alert email should show me the results of the search. In that case, how does the search need to be set up?
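For context, Splunk's email alert action can inline the triggering search results, and a dashboard can be referenced by link in the message body (it cannot be embedded directly). A hedged savedsearches.conf sketch, where the stanza name, recipient, and dashboard URL are examples:

```ini
# savedsearches.conf fragment (illustrative names/URLs)
[My Alert]
action.email = 1
action.email.to = me@example.com
# Attach the triggering search results, rendered inline in the email body
action.email.sendresults = 1
action.email.inline = 1
# A dashboard cannot be embedded in the mail, but a link to it can be included
action.email.message.alert = Alert $name$ fired. Results below. Dashboard: https://splunk.example.com/en-US/app/search/my_dashboard
```

The same options are exposed in the UI under the alert's "Send email" action ("Include: Inline Table" corresponds to sendresults/inline).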
Greetings, I have saved searches (alerts) like this: I know that each of these saved alerts can be visualized independently. Is it possible to combine these saved searches (alerts) into one visualization and one search? For example:

1. I want to make a search that counts triggers from all alerts (all saved searches). The table or statistics should look like this:

Field                    | Triggered count
Saved Search 1 (Alert 1) | 3
Saved Search 2 (Alert 2) | 4
and so on.

2. From that search, I want to visualize it, for example as a pie chart, so it will look like this:
Saved Search 1 = 20%
Saved Search 2 = 30%
and so on.

Thanks in advance.
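One common way to get a per-alert trigger count in a single search is to query the scheduler logs in the `_internal` index; a hedged sketch, assuming the alerts run on the same instance and the field names match what the scheduler logs on recent versions:

```
index=_internal sourcetype=scheduler status=success alert_actions=*
| stats count AS "Triggered count" BY savedsearch_name
```

The resulting table can be rendered directly as a pie chart (savedsearch_name slices sized by "Triggered count"). `index=_audit action=alert_fired` is an alternative source for the same numbers.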
Hey guys, recently I found an issue when using the AppDynamics API to retrieve metrics or events on a LITE account. I know about the restriction to the last 1440 minutes, and I comply with it by requesting data for less than that. However, the API still returns a 400 error with the body: "HTTP Status 400 - For LITE Account only last 1440 minutes of data could be extracted". It is strange because I request less than the limit. For example, the request that I use is:

GET /controller/rest/applications/9077/metric-data?output=JSON&metric-path=Overall+Application+Performance%7C*&time-range-type=BETWEEN_TIMES&start-time=1596446430281&end-time=1596446434281&rollup=false HTTP/1.1

The time range defined by the start-time and end-time parameters is only a few seconds long, and it is just a minute in the past relative to the time the request is made, so it fits within the last 1440 minutes; however, I still get the error. If I request data using the BEFORE_NOW parameter instead, like:

GET /controller/rest/applications/9077/metric-data?output=JSON&metric-path=Overall+Application+Performance%7C*&time-range-type=BEFORE_NOW&duration-in-mins=10&rollup=false HTTP/1.1

I get a successful response. But I don't see anywhere in the documentation that the BETWEEN_TIMES time-range type is not supported on LITE accounts. So, bottom line, the question is: why does the BETWEEN_TIMES range type not work on my LITE account?
I want to set one token value which includes another token's value, "dc_no_tok", but it is not picking up the value of "dc_no_tok". I am not sure what I am doing wrong here. I am passing "dc_no_tok" from another dashboard to this dashboard, and I can see the token value being posted in the URL, e.g.: https://abc/en-US/app/search/dc2_consul_level2_errors_test?form.host_tok=consul_server&form.dc_no_tok=2

XML:
=================
<input depends="$alwaysHide$" type="dropdown" token="host_tok" searchWhenChanged="true">
  <label />
  <change>
    <condition value="consul_client">
      <set token="Panel1">search host!=*consul* OR servername!=*consul* AND (host=pc$dc_no_tok$* OR servername=pc$dc_no_tok$* OR host=sc$dc_no_tok$* OR servername=sc$dc_no_tok$*) earliest=-5m sourcetype=consul_log index=hcm_consul "[ERROR]" NOT ("Newer Consul version available") | eval ERROR=case(like(_raw, "%Push/Pull with%"), "Push/Pull Error", like(_raw, "%Failed fallback ping%"), "Failed fallback ping Error", like(_raw, "%connection reset by peer%"), "Connection reset by peer Error", like(_raw, "%keepalive timeout%"), "Keepalive Timeout Error", like(_raw, "%i/o timeout%"), "I/O Timeout Error", like(_raw, "%lead thread didn't get connection%"), "Lead thread didn't get connection Error", like(_raw, "%failed to get conn: EOF%"), "Failed to get conn: EOF Error", like(_raw, "%rpc error making call: EOF%"), "RPC error making call: EOF Error", like(_raw, "%Timeout exceeded while awaiting headers%"), "Timeout exceeded while awaiting headers Error", like(_raw, "%rpc error making call: Permission denied%"), "RPC error making call: Permission denied Error", like(_raw, "%Permission denied%"), "Permission denied Error", true(), "Other Error")| stats count by ERROR</set>
      <set token="Panel2">host!=*consul* OR servername!=*consul* AND (host=pc$$dc_no_tok$$* OR servername=pc$dc_no_tok$* OR host=sc$dc_no_tok$* OR servername=sc$dc_no_tok$*) earliest=-60m sourcetype=consul_log index=hcm_consul "[ERROR]" NOT ("Newer Consul version available")| chart count by host | trendline sma5(foo) AS sm_count</set>
    </condition>
    <condition value="consul_server">
      <set token="Panel1">host=ss$dc_no_tok$consul* OR servername=ss$dc_no_tok$consul* earliest=-5m sourcetype=consul_log index=hcm_consul "[ERROR]" NOT ("Newer Consul version available") | eval ERROR=case(like(_raw, "%Push/Pull with%"), "Push/Pull Error", like(_raw, "%Failed fallback ping%"), "Failed fallback ping Error", like(_raw, "%connection reset by peer%"), "Connection reset by peer Error", like(_raw, "%keepalive timeout%"), "Keepalive Timeout Error", like(_raw, "%i/o timeout%"), "I/O Timeout Error", like(_raw, "%lead thread didn't get connection%"), "Lead thread didn't get connection Error", like(_raw, "%failed to get conn: EOF%"), "Failed to get conn: EOF Error", like(_raw, "%rpc error making call: EOF%"), "RPC error making call: EOF Error", like(_raw, "%Timeout exceeded while awaiting headers%"), "Timeout exceeded while awaiting headers Error", like(_raw, "%rpc error making call: Permission denied%"), "RPC error making call: Permission denied Error", like(_raw, "%Permission denied%"), "Permission denied Error", true(), "Other Error")| stats count by ERROR</set>
      <set token="Panel2">host=ss$dc_no_tok$consul* OR servername=ss$dc_no_tok$consul* earliest=-60m sourcetype=consul_log index=hcm_consul "[ERROR]" NOT ("Newer Consul version available")| chart count by host | trendline sma5(foo) AS sm_count</set>
    </condition>
  </change>
  <choice value="consul_client">Client</choice>
  <choice value="consul_server">Server</choice>
  <default />
</input>
=================
I am getting the following error while Splunk tries to raise an alert as a Resilient incident:

__main__ - ERROR - Alert action failed to create Resilient incident!

If anyone else has run into a similar problem, it would be a great help if you shared the steps taken to resolve it, or pointed me in a direction on which I can focus my troubleshooting.
I have a requirement where the CEF data comes in as a URL, and I need to pass just a part of that URL as input (ARTIFACT.CEF.URL) to an action in Splunk Phantom. I am using Phantom version 4.8. Can someone suggest how I can pass just the relevant part of the URL instead of the complete URL?
Hi all, I'm currently facing an issue. I migrated /etc/users/* from a standalone ES search head into the deployer's shcluster/apps/ to be deployed to the 3 SHC members. The authentication.conf in system/local on the SHC members is also set up for LDAP-mapped authentication. After deploying and reloading authentication, the 3 SHC members do not reflect the same number of users on the individual search heads. Is this a bug, or is there a reason why the user list is not populating correctly from LDAP?
Hello everyone! I have to generate a time chart for a calculated average with the sample query below.

Sample query:
|streamstats ..... by id
|where....
|eval ...
|eval....
|stats min(diff) as value1 by id
|stats avg(value1) as valueA count as Total

I would like to have a time chart of 'valueA' over time. I tried various ways to produce a chart but was not able to. Please let me know how to create a chart of the average over time.
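For the average to be chartable over time, the `_time` dimension has to survive the `stats` pipeline, which is usually done by binning `_time` before aggregating. A hedged sketch built on the sample query's field names (span and the elided leading commands are placeholders):

```
<base search with streamstats/where/eval as above>
| bin _time span=1h
| stats min(diff) AS value1 BY _time, id
| stats avg(value1) AS valueA, count AS Total BY _time
```

Because the final result still carries `_time`, it can be rendered directly as a line or column chart; `timechart` cannot be used at the end here because the double aggregation by `id` has to happen first.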
RHEL7, Splunk/forwarder v8.0.4. I'm setting up a distributed installation (1x search head, 2x indexers). There's been quite a bit of back and forth troubleshooting. When running 'splunk restart', 2 of the 3 manage to start up the web interface as desired, with the correct CA showing up in the browser; the remaining one does not, even though the config file /opt/splunk/etc/system/local/web.conf looks identical on all of them. Another config file, ~/etc/system/local/server.conf, is similar, with only serverName and the hashed pass4SymmKey and sslPassword differing. This setup also uses the .pem file as serverCert. Rather than the decrypted .key file, server.conf runs off the encrypted one (in .pem format), with sslPassword supplied in the [sslConfig] section. My current question is: which configuration files affect the web interface? Once the web interface is up (and the second indexer hopefully shows up in 'splunk show cluster-bundle-status'), replication and data integrity would be next, before finally having all forwarders show up. I have a feeling/hope that all the current issues are related to me messing up the SSL setup. If this is the wrong place to ask/post this, I do apologize.
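For reference, the web interface is governed by web.conf, while [sslConfig] in server.conf covers the management port (8089), not Splunk Web; so a cert problem can break one without breaking the other. A hedged sketch of the SSL-relevant settings in each (file paths are examples):

```ini
# web.conf -- controls Splunk Web (port 8000)
[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/mycerts/mycert.pem
privKeyPath = /opt/splunk/etc/auth/mycerts/mykey.pem

# server.conf -- controls the management port (8089)
[sslConfig]
serverCert = /opt/splunk/etc/auth/mycerts/mycert.pem
sslPassword = changeme   # written in clear once; Splunk hashes it on restart
```

When one node fails to bring up the web interface with identical web.conf files, the differences usually live in the cert/key files themselves (passphrase, chain order, key match), and $SPLUNK_HOME/var/log/splunk/web_service.log on the failing node normally names the exact SSL error.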
E.g.: R06=Tue 24 Mar 2020,Wed 10 Jun 2020, where First_Date = Tue 24 Mar 2020 and Second_Date = Wed 10 Jun 2020. Then compare those dates against Verifed_Date, taking the verified date as the present date.

Date compare:
if First_Date is before Verifed_Date, consider the value as 1;
else if First_Date equals Verifed_Date, consider the value as 0;
else consider the value as 2.
If D2_ExecutionDate is null or empty, verified should be null.

a) If (DateCompare(Verifed_Date, First_Date) == 0 || DateCompare(Verifed_Date, First_Date) == 1) && (DateCompare(Verifed_Date, Second_Date) == 2 || DateCompare(Verifed_Date, Second_Date) == 0), get the verified values from PhaseMapping: verified = R06.1.

Can you please help me write a query for DateCompare and for a)? I tried with the eval command.
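In SPL, date comparisons like this are normally done by converting the strings to epoch time with `strptime` and branching with `case`. A hedged sketch, assuming the field values really use the format shown (e.g. `Tue 24 Mar 2020`) and keeping the question's field names:

```
| eval first_epoch    = strptime(First_Date, "%a %d %b %Y")
| eval verified_epoch = strptime(Verifed_Date, "%a %d %b %Y")
| eval DateCompare = case(
    isnull(D2_ExecutionDate) OR D2_ExecutionDate=="", null(),
    first_epoch <  verified_epoch, 1,
    first_epoch == verified_epoch, 0,
    true(), 2)
```

The same pattern repeated for Second_Date gives the second comparison, after which condition a) is an ordinary boolean `eval`/`where` over the two DateCompare results.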
1. If the same JobName value already exists, I am trying to get the average of that JobName's elapsed-time values.
2. Using that average, I should compare today's value for each JobName against the previous days' values for the same JobName.

E.g.: JobName: 'script_run', ElapsedTime: 5. Here I need to check whether the 'script_run' job name already exists (on a previous date) or not. If it exists, compare the elapsed time '5' with yesterday's (or any previous day's) value for the same 'script_run'. If today's elapsed value is greater than the previous day's value, I need to highlight it in the report. So what is needed here is iterating over each of today's JobNames and comparing its elapsed time against the same JobName from the previous day. The JobNames are all different, so a single hard-coded match case will not work here.

Getting the JobNames with field name:

index="application_**" host="W4Q**" sourcetype="log" [search index="application_**" host="W4Q**" sourcetype="log" earliest=-1d@d latest=now | top JobName,SecondsElapsed | table JobName,SecondsElapsed] | stats avg(SecondsElapsed) as Duration by JobName,SecondsElapsed | stats count(eval(if(SecondsElapsed > Duration,true,0))) as LongRun by JobName | foreach * [eval <<FIELD>>_VAL=if('<<FIELD>>'>Duration,'true',null())]

Any help with this? Thanks.
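One way to avoid the subsearch entirely is to pull both today's and the historical events in one search and let `eventstats` compute the per-JobName historical average; a hedged sketch, assuming `JobName` and `SecondsElapsed` are extracted as in the query above and a 7-day history window:

```
index="application_**" host="W4Q**" sourcetype="log" earliest=-8d@d latest=now
| eventstats avg(eval(if(_time < relative_time(now(), "@d"), SecondsElapsed, null()))) AS HistAvg BY JobName
| where _time >= relative_time(now(), "@d") AND SecondsElapsed > HistAvg
| table JobName, SecondsElapsed, HistAvg
```

The `eval` inside `avg()` restricts the average to events before today, so the `where` clause keeps only today's runs that exceed their own job's historical average; those rows are the ones to highlight (e.g. via table cell formatting in the dashboard).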
Hi, If I create a field extraction in the context of App1 and set the permissions as Global and give Everyone read permissions, should the fields be visible in searches in App2? I had this situation recently and nothing I did could change anything until I moved the field extraction from App1 to App2 and then the user who only can access App2 was able to see the fields in search. Is there somewhere where all this is clearly documented? Cheers, Jeremy.
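For context, "Global" sharing is recorded in the owning app's metadata, and a globally exported extraction should be visible from any app. A hedged sketch of what the metadata entry is expected to look like in $SPLUNK_HOME/etc/apps/App1/metadata/local.meta (the stanza name is an example):

```ini
[props/my_sourcetype/EXTRACT-myfields]
export = system
access = read : [ * ]
```

If `export = system` is present and the behavior still differs between apps, the usual suspects are a same-named extraction in App2 shadowing App1's, or role-based app visibility; `| rest /services/data/props/extractions` (or btool) shows which copy actually wins for a given user.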
This seems to be an odd issue when using tokens to search an entire CSV file. I don't know if this is built into Splunk on purpose, but...

Example: stuff.csv contains:

Equipment           | Location_within_building | Street          | Price   | CI_Number
TV stand            | first floor              | 1404 Bay street | $29.99  | 121
Wireless headset    | second floor             | 1404 Bay street | $39.99  | 223
Laptop              | second floor             | 29th Bay Pl     | $999.99 | 3334
Wireless Microphone | first floor              | 404 Bay Street  | $9.99   | 5552

The user has an input text box where they can search the entire CSV file, with a token of *$Item$*. The issue is that if a user puts "1404 Bay street" into the text box, they won't get any results back. However, if a user puts in "1404", they will get results. Is there any way to modify the search query (within a statistics table):

|inputlookup stuff.csv| search Equipment=*$Item$* OR Location_within_building = *$Item$* OR Street = *$Item$* OR Price = *$Item$* OR CI_Number = *$Item$*

so that if a user puts in "1404", "1404 Bay", or "1404 Bay street", they would get back:

Equipment        | Location_within_building | Street          | Price  | CI_Number
TV stand         | first floor              | 1404 Bay street | $29.99 | 121
Wireless headset | second floor             | 1404 Bay street | $39.99 | 223

instead of getting nothing back when searching the entire CSV file?
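The likely culprit is that an unquoted value containing spaces splits at the first space, so `Street=*1404 Bay street*` parses as a field test on `*1404` plus two bare terms. Quoting the wildcarded token should make multi-word input work; a hedged sketch using the question's own field and token names:

```
| inputlookup stuff.csv
| search Equipment="*$Item$*" OR Location_within_building="*$Item$*"
    OR Street="*$Item$*" OR Price="*$Item$*" OR CI_Number="*$Item$*"
```

With the quotes in place, "1404", "1404 Bay", and "1404 Bay street" should all match the two rows whose Street contains that substring.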
Hi guys, (please see the attached file for a better understanding) I need help adjusting my query to show the results below. I want to put each (software, brand, product) group in one single row, with the total of their counts under "count". For example, instead of having multiple Mac, Window, etc. rows, we should have just one row with the total count.

Current table:

Software | Brand | Product     | Number of count
Mac      | Apple | MTBNUYE2V0  | 1
Mac      | Apple | MTBNUYE2V1  | 1
Mac      | Apple | MTBNUYE2V2  | 1
Mac      | Apple | MTBNUYE2V3  | 1
Mac      | Apple | MTBNUYE2V4  | 1
Mac      | Apple | MTBNUYE2V5  | 1
Mac      | Apple | MTBNUYE2V6  | 1
Mac      | Apple | MTBNUYE2V7  | 1
Mac      | Apple | MTBNUYE2V8  | 1
Mac      | Apple | MTBNUYE2V9  | 1
Mac      | Apple | MTBNUYE2V10 | 1
Mac      | Apple | Youbest     | 2
Window   | Win.x | Youbest     | 2
Window   | Win.x | Youbest     | 2
Window   | Win.x | Youbest     | 2
Window   | Win.x | Youbest     | 2
Window   | Win.x | Youbest     | 2
Window   | Win.x | Youbest     | 2
Window   | Win.x | Youbest     | 2
Window   | Win.x | Youbest     | 2
Window   | Win.x | Youbest     | 2
Window   | Win.x | Youbest     | 2
Window   | Win.x | Youbest     | 2
Window   | Win.x | Youbest     | 2
Window   | Win.x | Youbest     | 2

Example of expected result:

Software | Brand | Version  | Number of count
Mac      | Apple | 20.20.20 | 200
Windows  | Win.x | 30.90.09 | 320

Data is from: index=product sourcetype=my_product

Then, when we click on the "Number of count" value, it should open a new page showing all the details of the software (examples of details we should see: IP, NAME, HOSTNAME).

Data is from: |inputlookup product
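Collapsing the per-product rows into one row per software/brand pair amounts to summing the existing counts; a hedged sketch, assuming the per-row count field in the events is named `count`:

```
index=product sourcetype=my_product
| stats sum(count) AS "Number of count" BY Software, Brand
```

The click-through to a detail page is typically wired up separately, with a `<drilldown>` element on the table that links to a second dashboard and passes the clicked Software/Brand values as tokens into a search over `| inputlookup product`.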
Howdy, I am using a Splunk Cloud 14-day free trial. I need to set up a forwarder; however, my tenant will not allow me to enable it. Please tell me a trial is not limited to certain items? That really does defeat the purpose of trialing a product before committing to a purchase. Is this expected, or an issue with the tenant at creation? Cheers, Dan
I have a very large, slow data set to query, and I'd like to provide a report covering a sliding window from -21 days ago until 'now' for use on a dashboard. It's far too slow to run a query like this on demand. To work around that, I've added multiple reports:  A) a daily scheduled report to query 21 days of data ending on the start of the current day. Takes 30 minutes to run each night. B) an hourly scheduled report to query data from the start of the current day until the top of the hour. takes ~5 minutes. C) a final real-time report that uses `append' and 'loadjob` to combine A and B, plus gathers any new data from the last time B was run. The dashboard then references report C and sets it as the base search for the dashboard. The end result is 21 days of recent data with minimal wait time for the dashboard user.  It's working, but clunky. The dashboard can assemble the data in about a minute, but report C ends up duplicating all the data from A and B, which is causing storage issues. I'm hoping there is a better way to maintain a sliding window of data that uses a more efficient means to add new events and purge the old ones. Suggestions?
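A more storage-friendly version of this pattern is summary indexing: the scheduled report `collect`s only its aggregated rows into a summary index, and the dashboard searches that index over the sliding window, so old events age out with the index's retention policy instead of living on as duplicated saved-job artifacts. A hedged sketch (the index name, span, and aggregation fields are examples):

```
<expensive base search> earliest=-1h@h latest=@h
| bin _time span=1h
| stats count AS events BY _time, host
| collect index=my_summary
```

The dashboard's base search then becomes something like `index=my_summary earliest=-21d@d | stats sum(events) BY host`, which is cheap because it only reads pre-aggregated rows; the `si*` commands (`sistats`, `sitimechart`) plus "Enable summary indexing" on the report are the built-in variant of the same idea.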
I have a working connection with ServiceNow and can pull data from the database based on my inputs. However, one thing I am trying to do is pull down information from a custom database view that is not a table normally found in the SNOW database. For example, I have a custom view called u_customview_splunk. Is there a way to pull down a custom view from ServiceNow with the current TA? I have tried configuring the input as follows:

[snow://u_customview_splunk]
account = servicenowaccount
duration = 60
id_field = sys_id
index = snow_index
since_when = 2000-01-01 00:00:00
table = u_customview_splunk
timefield = sys_updated_on
disabled = 0

Is there another variable that can be called (i.e. instead of "table = xxxxx", something like "view = ......")? I haven't seen this yet in the doco.
Hello community! I am looking at various configurations options for AWS BYOL running ES.  We'll be leveraging SmartStore and just had a couple of questions on the architecture: Are we better off with twice as many i3.4xlarge (e.g. 10x for 1TB) or half as many i3.8xlarge (e.g. 5x)? Given the ephemeral nature are the i3 disks configured in a RAID0? Thanks!
I have installed SAI in a test environment (a standalone Splunk instance on CentOS 7) and added a Windows 10 entity. The entity does not appear in the UI, and I then noticed that the indexes em_metrics, em_meta, and infra_alerts were not created in Splunk. Shouldn't they be created during app installation? What should I do now? Thank you.
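If the app did not create them, the indexes can be declared by hand in a local indexes.conf; a hedged sketch using default $SPLUNK_DB paths (note that em_metrics must be a metrics-type index, while the other two are ordinary event indexes):

```ini
[em_metrics]
datatype = metric
homePath = $SPLUNK_DB/em_metrics/db
coldPath = $SPLUNK_DB/em_metrics/colddb
thawedPath = $SPLUNK_DB/em_metrics/thaweddb

[em_meta]
homePath = $SPLUNK_DB/em_meta/db
coldPath = $SPLUNK_DB/em_meta/colddb
thawedPath = $SPLUNK_DB/em_meta/thaweddb

[infra_alerts]
homePath = $SPLUNK_DB/infra_alerts/db
coldPath = $SPLUNK_DB/infra_alerts/colddb
thawedPath = $SPLUNK_DB/infra_alerts/thaweddb
```

After a restart, `| eventcount summarize=false index=em_meta` and `| mcatalog values(metric_name) WHERE index=em_metrics` are quick ways to confirm the indexes exist and are receiving data.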