All Topics


Hi,

We are trying to get metrics into Splunk over TCP. So far we have tried the following:

inputs.conf:

    [tcp://44444]
    connection_host = ip
    index = metrics_idx
    sourcetype = json_no_timestamp

(we have also tried sourcetype = _json and sourcetype = metrics_csv)

We can get this to work if we change the sourcetype to statsd and emulate the StatsD protocol, but we found that to be very limited.

We have 30-odd machines collecting thousands of data points (mainly counters; it was 5 things, now 12). What would be the best way to get this into Splunk without using JSON/CSV files?

Thanks!
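One alternative worth considering is the HTTP Event Collector, which has a native metrics payload format. A minimal sketch, assuming you can switch the senders from raw TCP to HTTP POSTs (the host, token, and metric name below are placeholders, and metrics_idx must be a metrics-type index):

    curl -k "https://<splunk-host>:8088/services/collector" \
      -H "Authorization: Splunk <hec-token>" \
      -d '{"time": 1700000000, "event": "metric", "index": "metrics_idx",
           "host": "host01",
           "fields": {"metric_name:counter.example": 42, "region": "us-east"}}'

Each POST can carry several metric_name:* entries in "fields", so thousands of counters per interval do not require intermediate files on disk.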
Hi, I'm wondering if it's possible to define and execute a macro from a lookup. I have an index with several (about 50) user actions, which aren't named in a user-friendly manner. Additionally, each action has different fields, which I'd like to extract using inline rex queries. In short, I'd like a table like the following:

    Time        UserName   Message
    10:00 a.m.  JohnDoe    This is action1. Details for action1.
    10:01 a.m.  JohnDoe    This is action2. Details for action2.
    10:02 a.m.  JohnDoe    This is action3. Details for action3.

I know I can define a friendly name for the action using a lookup. I can also do the rex field extractions and compose a details field using a macro for each action. However, is there a way to also rex the fields and define the details in a lookup?

I was thinking of creating a lookup like this:

    Action   FriendlyDescription   MacroDefinition
    action1  "This is action1"     | rex to extract fields for action1 | eval for Details for action1
    action2  "This is action2"     | rex to extract fields for action2 | eval for Details for action2
    action3  "This is action3"     | rex to extract fields for action3 | eval for Details for action3

and running something like this:

    index=MyIndex source=MySource
    | lookup MyLookup.csv ActionId OUTPUT FriendlyDescription, MacroDefinition
    `code to execute MacroDefinition`
    | table _time, UserName, FriendlyDescription, Details

I'm not sure if I'm barking up the wrong tree, but the reason I'd like to do this in one place (a lookup) instead of 50 different macro definitions is that it'd be neat to have all the code in one place. Thanks!
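For what it's worth, macros are expanded when the search string is parsed, before any events flow, so SPL cannot execute a MacroDefinition string returned by a lookup at runtime. A minimal sketch of one workaround, assuming the per-action patterns can live side by side in a single search (the rex patterns and field names here are hypothetical): keep FriendlyDescription in the lookup and chain the extractions, since a rex that does not match simply leaves its fields null:

    index=MyIndex source=MySource
    | lookup MyLookup.csv ActionId OUTPUT FriendlyDescription
    | rex "action1 payload: (?<a1_detail>\S+)"
    | rex "action2 payload: (?<a2_user>\S+) to (?<a2_target>\S+)"
    | eval Details=case(ActionId=="action1", "Detail: ".a1_detail,
                        ActionId=="action2", a2_user." -> ".a2_target,
                        true(), "n/a")
    | table _time, UserName, FriendlyDescription, Details

The 50 rex/eval pairs could still be kept in one place by wrapping this whole block in a single shared macro.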
Using the Splunk Cloud Add-on for Salesforce, I had the configuration all set up and it gave me a green light saying the account was successfully added, but when I move over to the Inputs tab I get a "request failed with status code 500" error on version 4.9. I ran an update on the app; it is now on version 4.10.0, and I still get this error on the Inputs page.
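A small diagnostic sketch, assuming the add-on writes its logs to Splunk's internal index under a source name containing "salesforce" (the exact source name can vary by add-on version): a 500 on the Inputs page usually has a matching stack trace in those logs:

    index=_internal source=*salesforce* (ERROR OR CRITICAL)
    | sort - _time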
This is a follow-up question to the solution on this thread: https://community.splunk.com/t5/Getting-Data-In/create-multiple-sourcetypes-from-single-syslog-source/m-p/701337/highlight/false#M116063

I'm trying to do exactly what the original question asked, but I need to apply different DELIMS/FIELDS values to the different sourcetypes I create this way. The solution says that once the new sourcetype is created, "...just use additional transforms entries with regular expressions that fit the specific subset of data...". Does this mean that if I want to further extract fields from the new sourcetype I can only do that using TRANSFORMS from that point forward, or would I be able to put a new stanza further down in props.conf for [my_new_st] and use additional REPORTs or EXTRACTs that only apply to that new sourcetype?

For example, can I do something like the following? Description: first split the individual events based on the value regex-matched in the 5th field, then do different field extractions for each of the new sourcetypes.

props.conf:

    [syslog]
    TRANSFORMS-create_sourcetype1 = create_sourcetype1
    TRANSFORMS-create_sourcetype2 = create_sourcetype2

    [sourcetype1]
    REPORT-extract = custom_delim_sourcetype1

    [sourcetype2]
    REPORT-extract = custom_delim_sourcetype2

transforms.conf:

    [create_sourcetype1]
    REGEX = ^(?:[^ \n]* ){5}(my_log_name_1:)\s
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::sourcetype1

    [create_sourcetype2]
    REGEX = ^(?:[^ \n]* ){5}(my_log_name_2:)\s
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::sourcetype2

    [custom_delim_sourcetype1]
    DELIMS = " "
    FIELDS = d_month,d_date,d_time,d_source,d_logname,d_info,cs_url,cs_bytes,cs_port

    [custom_delim_sourcetype2]
    DELIMS = " "
    FIELDS = d_month,d_date,d_time,d_source,d_logname,d_info,cs_username,sc_http_status
Since our recent upgrade to Splunk Cloud 9.2, we have noticed authentication issues where our add-ons stop working with errors such as "No AWS account named", "Unable to obtain access token", and "Invalid client secret provided" for our AWS and Azure add-ons, basically anything requiring Splunk to decrypt credentials stored in passwords.conf. Has anyone else had this problem? We're currently engaged with Splunk Support to find a root cause, but I would love to know if anyone else has faced the same problem since upgrading to 9.2.
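A quick sketch for scoping the impact, grounded in the error strings quoted above (these add-on errors are typically logged to the internal index):

    index=_internal log_level=ERROR
        ("Unable to obtain access token" OR "Invalid client secret provided" OR "No AWS account named")
    | timechart span=1h count by sourcetype

If the first occurrences line up with the 9.2 upgrade window, that timeline is also useful evidence for the support case.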
I'm trying to implement the Splunk Machine Learning Toolkit query found here: https://github.com/splunk/security_content/blob/develop/detections/cloud/abnormally_high_number_of_cloud_security_group_api_calls.yml

Actually just the first part:

    | tstats count as all_changes from datamodel=Change_test
        where All_Changes.object_category=* All_Changes.status=*
        by All_Changes.object_category All_Changes.status All_Changes.user

But I'm getting an error (see the attached screenshot). How do I fix this?
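Without the screenshot the exact error is unknown, but two common causes with tstats are a data model name that cannot be resolved and dataset field prefixes that do not match the model's root dataset. A diagnostic sketch under that assumption (Change_test and All_Changes are taken from the query above):

    | datamodel Change_test

If this returns nothing, the model name is wrong or not shared with your app; if its root dataset is not named All_Changes, the where/by field prefixes in the tstats query need to be changed to match.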
I have this working on other panels but can't get it on a stacked column chart:

    | streamstats current=f last(Timestamp) as HaltedCycleLastTime by Cycle
    | eval HaltedCycleSecondsHalted=round(HaltedCycleLastTime - Timestamp,0)
    | eval HaltedCycleSecondsHalted=if(HaltedCycleSecondsHalted < 20,HaltedCycleSecondsHalted,0)
    | streamstats time_window=30d sum(HaltedCycleSecondsHalted) as HaltedCycleSecondsPerDayMA
    | eval HaltedCycleSecondsPerDayMA=round(HaltedCycleSecondsPerDayMA,0)
    | chart sum(HaltedCycleSecondsHalted) as HaltedSecondsPerDayPerCycle by CycleDate Cycle limit=0

This produces a stacked column chart based on the chart command, but in Dashboard Studio I expect to see HaltedCycleSecondsPerDayMA as a pickable field and I don't. I added it to the code as overlayFields, but it is still not showing.
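One likely explanation: chart, like stats, keeps only its aggregations and by-fields, so HaltedCycleSecondsPerDayMA is dropped from the results, and Dashboard Studio can only list fields that actually exist in the output. With a two-field split (by CycleDate Cycle) chart accepts a single aggregation, so the overlay column has to be attached afterwards. A minimal sketch, assuming one row per CycleDate on both sides so appendcols lines the rows up (<base search> stands for the streamstats/eval pipeline above):

    <base search>
    | chart sum(HaltedCycleSecondsHalted) as HaltedSecondsPerDayPerCycle by CycleDate Cycle limit=0
    | appendcols
        [ search <base search>
          | stats max(HaltedCycleSecondsPerDayMA) as HaltedCycleSecondsPerDayMA by CycleDate
          | fields HaltedCycleSecondsPerDayMA ]

With the field present in the results, it should become pickable for overlayFields.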
On browser tests we have auto-retry enabled, and when a test fails, auto-retry kicks in and updates the results. On the browser test's Page Availability section, clicking around I could see a flyout saying "Multiple runs found for Uptime". How do I get to and view this section? (I'm having a hard time finding it.)
We have some tokens that are due to expire shortly.

Q1: Does the 'Default' token automatically rotate?
Q2: How do you manually rotate a token using the dashboard? (I am aware of the API option.)
Q3: If the API call is the only option, what permissions are required to make the 'rotate' API call?

Thanks in anticipation.
Ian
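On Q3, a minimal sketch of the rotation call, assuming these are Splunk Observability Cloud access tokens (realm, token name, and the grace period are placeholders; the X-SF-TOKEN you authenticate with generally needs org admin rights):

    # Rotate <tokenName>; graceful keeps the old secret valid for the
    # given number of seconds so agents can be migrated gradually.
    curl -X POST \
      "https://api.<realm>.signalfx.com/v2/token/<tokenName>/rotate?graceful=604800" \
      -H "Content-Type: application/json" \
      -H "X-SF-TOKEN: <your-api-access-token>"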
Hi,

Can someone please tell me how to compare the value of a particular day with the value of the same day of the previous week, and create a new field for the deviation?

Example: the command below generates output with the columns Field1, Field2, Field3, Day, Time, Week_of_year, TOTAL:

    | stats sum(Number_Events) as TOTAL by Field1 Field2 Field3 Day Time Week_of_year

We need the output like below:

1. In tabular form: is it possible to have an output like the attached, with this week's value, last week's value, and the deviation side by side?
2. If point 1 is possible, is it then possible to have a timechart with 3 lines over the 24 hours of the day (example data for 3 hours is attached; see the sketch after this list):
   - line 1 corresponds to week of year - 2 (39)
   - line 2 corresponds to week of year - 1 (40)
   - line 3 corresponds to the current week of year (41)

Thanks in advance for helping me out.
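A minimal sketch of one approach using timewrap, assuming hourly buckets (the index name is a placeholder): timechart builds the hourly series, and timewrap then splits it into one column per week, which gives the 3-line chart for point 2 directly:

    index=<your_index> earliest=-21d@w0
    | timechart span=1h sum(Number_Events) as TOTAL
    | timewrap 1week

The wrapped columns get relative names (such as latest_week and 1week_before by default), so for point 1 the deviation can be computed with an eval over those two columns once you see how they are named in your output.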
Hello, I'm figuring out the best way to address the above situation. We have a large multisite cluster with 10 indexers on each site; a dedicated instance will act as the SC4S instance and send everything to a load balancer, whose job will be to forward everything to the cluster.

There is documentation about the implementation, but I still can't wrap my head around the direct approach. The SC4S config stanza would currently look something like this:

    [http://SC4S]
    disabled = 0
    source = sc4s
    sourcetype = sc4s:fallback
    index = main
    indexes = main, _metrics, firewall, proxy
    persistentQueueSize = 10MB
    queueSize = 5MB
    token = XXXXXX

Several questions about that, though:

- I'd need to create a HEC token first, before configuring SC4S, but in a clustered environment, where do I create the HEC token? I've read that I should create it on the cluster manager and then push it to the peers, but how exactly? I can't find much info about the specifics, especially since I configure everything via config files, so an example of the correct stanza to push out would be great; I just can't find any.
- Once pushed, I need to configure SC4S on the other side with the generated token (as seen above). Does the config here seem correct? There's a lack of example configs, so I'm spitballing a little.

Kind regards
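On the first question, a minimal sketch of the peer-side configuration, distributed from the cluster manager via an app in $SPLUNK_HOME/etc/manager-apps/ (the app name hec_inputs and the token GUID are placeholders; when configuring by file you generate the GUID yourself):

    # $SPLUNK_HOME/etc/manager-apps/hec_inputs/local/inputs.conf (on the cluster manager)
    [http]
    disabled = 0
    port = 8088

    [http://SC4S]
    disabled = 0
    token = 12345678-1234-1234-1234-123456789012
    index = main
    indexes = main, _metrics, firewall, proxy
    sourcetype = sc4s:fallback

Then push the bundle to the peers from the cluster manager:

    splunk apply cluster-bundle --answer-yes

The same GUID then goes into the token line on the SC4S side, and the load balancer should target HEC (port 8088 here) on the peers.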
I am trying to use my friend's credentials to log into Splunk Enterprise, and I am unable to do so. Also, I am using ODBC to connect Splunk with Power BI; when I do that locally it works, but when I try to connect remotely it does not. I am having issues with the server URL and port number. Any help with these queries would be appreciated. TIA.
Hi Splunkers!

I have a question about memory. In my Splunk Monitoring Console, I see approx. 90% of memory used by Splunk processes; the amount of memory is 48 GB. In vCenter, I can see that only half of the assigned memory is used (approx. 24 GB of the 48 GB available).

Who is telling me the truth: Splunk Monitoring or vCenter? And overall, is there something to configure in Splunk so it can use all of the available memory?

Splunk 9.2.2 / Red Hat 7.8. Thank you.

Olivier.
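A likely explanation (an assumption without seeing the host) is that the two tools count differently: Linux fills free RAM with page cache, which the guest reports as used, while vCenter reports the memory the hypervisor sees as actively touched. Checking on the Red Hat host itself separates the two views:

    # "used" excludes buff/cache; "available" is what processes can still claim
    free -h
    # largest resident processes, to see what splunkd itself actually holds
    ps -eo pid,comm,rss --sort=-rss | head

If "available" is large, there is nothing to tune; Splunk will use the memory when its workload needs it.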
I have created a stacked bar chart based on a data source (query) and everything works, with one exception: I have to select each data value to display after the query runs, under Data Configuration - Y. All of my desired values show up there, but they are not selected by default, so the chart is blank until I select them. Can they be selected by default?
My query is:

    index=stuff
    | search "kubernetes.labels.app"="some_stuff" "log.msg"="Response" "log.level"=30 "log.response.statusCode"=200
    | spath "log.request.path"
    | rename "log.request.path" as url
    | convert timeformat="%Y/%m/%d" ctime(_time) as date
    | stats min("log.context.duration") as RT_fastest
            max("log.context.duration") as RT_slowest
            p95("log.context.duration") as RT_p95
            p99("log.context.duration") as RT_p99
            avg("log.context.duration") as RT_avg
            count(url) as Total_Req
            by url

and I am getting the response in the attached screenshot. I want to club all the similar APIs together, e.g. all the /getFile/* calls as one API, and get the average time.
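A minimal sketch of one way to do the clubbing, assuming the variable part always follows a known prefix such as /getFile/ (the regex is an illustration to extend for your other path families): normalise url with replace() before the stats, so all /getFile/<id> requests land in one row:

    ...
    | rename "log.request.path" as url
    | eval url=replace(url, "^/getFile/.*$", "/getFile/*")
    | stats avg("log.context.duration") as RT_avg count as Total_Req by url

Additional patterns can be chained as further replace() calls, or collected in a single case() expression.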
Hi,

I have events containing multiple countries. I want to count by the country field over several time ranges, sorted from the highest count to the lowest, e.g.:

    Country   Last 24h   Last 30 days   Last 90 days
    US        10         50             100
    Aus       8          35             80

I need a query; kindly assist me.
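A minimal sketch, assuming the search runs over the last 90 days and the field is named Country (the index is a placeholder): count(eval(...)) only counts events where the condition is true, so one pass can fill all three windows:

    index=<your_index> earliest=-90d
    | eval age=now()-_time
    | stats count(eval(age<=86400)) as "Last 24h"
            count(eval(age<=2592000)) as "Last 30 days"
            count as "Last 90 days"
            by Country
    | sort - "Last 24h"

(86400 and 2592000 are 24 hours and 30 days in seconds.)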
I have ingested data from InfluxDB into Splunk Enterprise using the InfluxDB add-on from Splunk DB Connect. When I run the following query in the SQL Explorer of the created InfluxDB connection, I get empty values for the value column:

    from(bucket: "buckerName")
      |> range(start: -6h)
      |> filter(fn: (r) => r._measurement == "NameOfMeasurement")
      |> filter(fn: (r) => r._field == "value")
      |> yield(name: "count")

Splunk DBX Add-on for InfluxDB JDBC
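One thing worth checking: the query above is written in Flux, while a JDBC connection labelled InfluxQL usually expects InfluxQL's SQL-like dialect, and a dialect mismatch can surface as empty columns. A sketch of the equivalent InfluxQL, assuming a 1.x-style database mapping for the bucket ("buckerName" and "NameOfMeasurement" are from the post):

    SELECT "value" FROM "NameOfMeasurement" WHERE time > now() - 6h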
Apologies for the basic question. As a PoC, we have been provided with a Splunk Enterprise trial license. We would first like to ingest Palo Alto logs and fire alerts (by email, etc.), but we don't know how to go about it. (We were able to ingest past logs manually, but alerts can't be triggered on past logs, can they? Would they fire if we set the dates to the present? We also don't yet know how to set up the alerts themselves.) As for the environment, we have stood up one virtual server on FJCloud and run Splunk on it; we have not installed forwarders on any other servers. If anyone knows, we would appreciate your guidance. Thank you in advance.
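On the alerting part: alerts run on a schedule over a recent time window, which is why replayed historical events with old timestamps will not trigger them; only data whose timestamps fall inside the scheduled window counts. A minimal sketch of a scheduled email alert, assuming the Palo Alto logs land in an index named pan_logs with sourcetype pan:traffic (index, sourcetype, and recipient are placeholders):

    # savedsearches.conf
    [PAN deny alert]
    search = index=pan_logs sourcetype=pan:traffic action=deny | stats count
    dispatch.earliest_time = -15m
    dispatch.latest_time = now
    cron_schedule = */15 * * * *
    enableSched = 1
    alert_type = number of events
    alert_comparator = greater than
    alert_threshold = 0
    action.email = 1
    action.email.to = soc@example.com

The same alert can be created in Splunk Web via Save As > Alert on a search, which avoids editing the file by hand.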
I migrated to v9.1.5 from v7.3.6, where I had the TA-XLS app installed and working. Running 'outputxls' now generates a 'cannot concat str to bytes' error for the following line of the app's outputxls.py file:

    try:
        csv_to_xls(os.environ['SPLUNK_HOME'] + "/etc/apps/app_name/appserver/static/fileXLS/" + output)

Tried encoding by appending .encode('utf-8') to the string: not working.

Tried importing the six and futurize/modernize libraries and ran the tooling to "upgrade" the script; it just added the import below and changed one line: not working.

    from __future__ import absolute_import

Tried to define each variable separately, among other things: not working.

    splunk_home = os.environ['SPLUNK_HOME']
    static_path = '/etc/apps/app_name/appserver/static/fileXLS/'
    output_bytes = output
    csv_to_xls(splunk_home + static_path.encode(encoding='utf-8') + output)

I sort of rely on this app to work, so any kind of help is needed! Thanks!
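Under Python 3 (which Splunk 9.x enforces), os.environ['SPLUNK_HOME'] and the path literal are str, so the error suggests output arrives as bytes; encoding the str side, as in the attempts above, only moves the mismatch around. A minimal sketch of the opposite fix, decoding the bytes side (app_name and csv_to_xls are from the original script):

    import os

    splunk_home = os.environ['SPLUNK_HOME']  # str under Python 3
    static_path = '/etc/apps/app_name/appserver/static/fileXLS/'
    # 'output' may still be bytes in code ported from Python 2; decode it
    if isinstance(output, bytes):
        output = output.decode('utf-8')
    csv_to_xls(splunk_home + static_path + output)  # str + str + str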
Hi Splunk Community,

I've generated self-signed SSL certificates and configured them in web.conf, but they don't seem to be taking effect. Additionally, I am receiving the following warning message when starting Splunk:

    WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.

Could someone please help me resolve this issue? I want to ensure that Splunk uses the correct SSL certificates and that the hostname validation works properly.
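A minimal sketch of the stanzas involved, assuming the goal is for Splunk Web to serve the new certificate (paths are placeholders; restart Splunk afterwards and verify with a browser or openssl s_client which certificate is actually served):

    # $SPLUNK_HOME/etc/system/local/web.conf
    [settings]
    enableSplunkWebSSL = true
    serverCert = /opt/splunk/etc/auth/mycerts/myWebCert.pem
    privKeyPath = /opt/splunk/etc/auth/mycerts/myWebPrivateKey.key

    # $SPLUNK_HOME/etc/system/local/server.conf
    [sslConfig]
    # addresses the startup warning; requires certificates whose common
    # name / SANs match the server names Splunk connects to
    cliVerifyServerName = true

Note the warning itself concerns splunkd-to-splunkd TLS validation rather than Splunk Web, so the two symptoms are likely separate issues.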