All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi all, I'm looking for a better app than Meta Woot, for the following reasons: to track our license usage per index/sourcetype/host/source, and to graph it over a period of one month, a quarter, six months, and one year. Could anyone please suggest something you are using? Regards, Kavya
I want to export a list of the email addresses configured as recipients on all report/alert email actions. Is it possible to do that? I'd appreciate it if anyone can help.
I am trying to join two searches based on the closest time, to match each ticketnum with its real event. For example:

index=monitoring
12:01:00 host=abc status=down
3:05:00 host=abc status=down

index=ticket
12:03:00 host=abc ticketnum=inc123
3:07:00 host=abc ticketnum=inc456

Any idea how to join these two based on closest time?
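A common pattern for this kind of "nearest prior event" matching is to avoid join entirely: search both indexes at once, sort by time, and carry the most recent monitoring event forward onto each ticket with streamstats. A sketch only — index, host, status, and ticketnum are taken from the example above, and the field names may need adjusting:

```
(index=monitoring status=down) OR (index=ticket ticketnum=*)
| eval down_time = if(index=="monitoring", _time, null())
| sort 0 host _time
| streamstats current=f last(down_time) as nearest_down by host
| where isnotnull(ticketnum)
| eval gap_seconds = _time - nearest_down
| table _time host ticketnum nearest_down gap_seconds
```

This matches each ticket to the most recent earlier down event on the same host; if a ticket can also be logged before its event, a second pass in the opposite sort order would be needed.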
I'm trying to extract fields from the following event data:

[Scenario_summary]
Scenario Type=Manual Scenario
Goal Profile Name=Schedule 1
Mode=Scenario Scheduling
Scenario Duration=Start 27 Vusers: 1 every 00:00:15 (HH:MM:SS); Run for 00:30:00 (HH:MM:SS); Stop all Vusers simultaneously
Load Behavior=Initialize each Vuser just before it runs
[Scripts]
apache_on_5154=D:\LoadRunner Repo\LoadRunner\Loadrunner Scripts\apache_on_5154\apache_on_5154.usr
AjaxClickAndScript1=D:\LoadRunner Scripts\SPLUNK_SOLUTION_2\AjaxClickAndScript1\AjaxClickAndScript1.usr
AddNewCustomer=D:\CPE_Demo\AddNewCustomer\AddNewCustomer.usr
[Scripts_types]
apache_on_5154=Multi+QTWeb
AjaxClickAndScript1=WebAjax
AddNewCustomer=Multi+QTWeb

Specifically, I want to extract the LoadRunner group names and protocols below [Scripts_types], of which there can be 1 to n depending on the scenario. In this example the script names would be:

apache_on_5154
AjaxClickAndScript1
AddNewCustomer

I've tried regular expressions with named groups to extract the fields, e.g.

\[Scripts_types\]\n(?<Group1>.+)=.+
\[Scripts_types\]\n(?:.+)=.+\n(?<Group2>.+)=.+\n(?:.+)=.+
\[Scripts_types\]\n(?:.+)=.+\n(?:.+)=.+\n(?<Group3>.+)=.+

But that gives me three different named fields and does not cater for cases where there are more or fewer of these lines in the event. If I use a repeating group like

\[Scripts_types\](\n(?<GroupName>.+)=.+)+

it only captures the last iteration, i.e. "AddNewCustomer". Does anyone know how to deal with this?
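The usual way around the last-iteration-only behaviour of a repeated capture group in Splunk is rex with max_match=0, which returns every match as a multivalue field. A two-step sketch against the sample event above — first isolate the [Scripts_types] block, then split it into pairs (the regexes are assumptions based on the sample format):

```
... | rex field=_raw "\[Scripts_types\](?<types_block>(?:\s*[^=\r\n]+=[^\r\n]+)+)"
    | rex field=types_block max_match=0 "(?m)^\s*(?<GroupName>[^=\r\n]+)=(?<Protocol>[^\r\n]+)$"
```

GroupName and Protocol then come back as multivalue fields with one value per script, however many there are; mvexpand GroupName gives one row per script if that is easier to work with downstream.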
I'm having trouble formulating a search query for the following condition: when the number of "tests-failed" has exceeded 20% of "tests-total". How do you do percentages? P.S. I'm working with metrics data.
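With metrics data the percentage can be computed in an eval after mstats has aggregated the two series. A sketch, assuming the metric names are literally tests-failed and tests-total and the index is called test_metrics (substitute your own):

```
| mstats sum("tests-failed") as failed sum("tests-total") as total
    WHERE index=test_metrics span=1h
| eval pct_failed = round(failed / total * 100, 2)
| where pct_failed > 20
```

If the metrics are cumulative counters rather than per-interval counts, sum() may need to be swapped for latest() or rate() depending on how they are emitted.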
Hello, I'm having trouble finding an alternative to the mcatalog values(_value) command for metrics. The documentation says that values(_value) is not allowed, so what is another way I can display all the "_value" data points that were sent to Splunk? For example, I can run | mcatalog values(region) to display all the regions that a user was sending.
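Two possibilities worth trying, both sketches (my_metrics is an assumed index name): mpreview lists raw metric data points including _value, and mstats can aggregate _value broken out by metric_name:

```
| mpreview index=my_metrics target_per_timeseries=10

| mstats latest(_value) as value WHERE index=my_metrics AND metric_name=* span=1m BY metric_name
```

mpreview is intended for sampling rather than exhaustive listing (target_per_timeseries caps how many points per series it returns), so for "everything that was sent" the mstats form with a fine span is usually the more practical answer.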
I need to replace the strings below in a field with their respective values.

Field1 = "This field contains the information about students: student1, student2; student3.....studentN"
Field2 = "student1:{first_name:ABC,last_name:DEF},student2:{first_name:GHI,last_name:JKL},student3:{first_name ... and so on, the same structure up to studentN"

I need to create a new field containing the first_name and last_name values from Field2, substituted for student1, student2, ... studentN in Field1. N varies per event and could be anywhere in [0-100]. Expected result:

Expected_Field = "This field contains the information about students: ABC DEF, GHI JKL, ... to the end, N"

If there are 3 events in total, Expected_Field needs to be created for all 3 events. The ask is to parse the names out of Field2 and replace the studentN tokens in Field1.
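If the leading sentence in Field1 is fixed boilerplate, the simplest route is to rebuild it from Field2 rather than replace token by token. A sketch, assuming the Field2 format shown above (the regex is an assumption from the sample):

```
| rex field=Field2 max_match=0 "first_name:(?<fname>[^,}]+),last_name:(?<lname>[^,}]+)"
| eval names = mvzip(fname, lname, " ")
| eval Expected_Field = "This field contains the information about students: " . mvjoin(names, ", ")
```

rex max_match=0 makes fname and lname multivalue fields with one entry per student, so this handles any N per event. If a true in-place replacement of the studentN tokens is required, newer Splunk versions offer foreach mode=multivalue to loop a replace() over the student/name pairs.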
[bin]$ ./splunk start

Splunk> Like an F-18, bro.

Checking prerequisites...
	Checking http port [8000]: open
	Checking mgmt port [8089]: open
	Checking appserver port [127.0.0.1:8065]: open
	Checking kvstore port [8191]: open
	Checking configuration... Done.
	Checking critical directories... Done
	Checking indexes...
		Validated: _audit _internal _introspection _metrics _metrics_rollup _telemetry _thefishbucket add_on_builder_index analysis_meta aws_db_status captain_america databricks_hec_webhook databricks_sqs_s3 databricks_webhook databricksjobruns databricksjobs dl_cluster em_meta em_metrics history infra_alerts iron_man2 main mysql1 platform_versions rss_feed summary talend_error tmc_info_warn trackme_metrics trackme_summary
	Done
	Checking filesystem compatibility... Done
	Checking conf files for problems...
		Invalid key in stanza [email] in /home/******/splunk/etc/apps/search/local/alert_actions.conf, line 2: show_password (value: True).
		Invalid key in stanza [mariadb] in /home/******/splunk/etc/apps/splunk_app_db_connect/default/db_connection_types.conf, line 240: supportedMajorVersion (value: 3).
		Invalid key in stanza [mariadb] in /home/******/splunk/etc/apps/splunk_app_db_connect/default/db_connection_types.conf, line 241: supportedMinorVersion (value: 1).
		Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
	Done
	Checking default conf files for edits...
	Validating installed files against hashes from '/home/******/splunk/splunk-8.0.5-a1a6394cc5ae-linux-2.6-x86_64-manifest'
	All installed files intact.
	Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...
Done
						[  OK  ]

Waiting for web server at http://127.0.0.1:8000 to be available..................................................splunkd 18464 was not running.
Stopping splunk helpers...
						[  OK  ]
Done.
Stopped helpers.
Removing stale pid file... done.

WARNING: web interface does not seem to be available!
I have checked the splunkd.log file but still can't figure out which error is preventing the Splunk daemon from starting. Can someone please help with the above issue? @richgalloway, help needed.
The device sends its logs via syslog to the heavy forwarder, which receives them, stores them, and tries to send them to the indexers, but the errors attached below appear.

Search Head 1                      Search Head 2
     ↑                                  ↑
 Indexer 1                          Indexer 2
     ↑                                  ↑
Heavy Forwarder                  Heavy Forwarder

Error 1:
09-07-2020 01:19:40.949 -0500 WARN TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to host_dest=(ip of indexer) inside output group default-autolb-group from host_src=(ip of forwarder source) has been blocked for blocked_seconds=100. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

Error 2:
09-07-2020 10:50:05.169 -0500 WARN TailReader - Enqueuing a very large file=/xxx/xxx/xxxxxx/2020/09/07/user.log in the batch reader, with bytes_to_read=1846637230, reading of other large files could be delayed
I have a monthly dashboard and I want to refresh it only once a month, not every time I open it in Splunk. Is there a way to do that? For example: on the 2nd of every month the data should be refreshed, and then for the whole month, whenever I visit the dashboard, it should show me that same data.
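One common approach (a sketch; the report name and search are placeholders): schedule a report to run monthly and have each dashboard panel load that scheduled run's cached result with loadjob instead of re-running the search:

```
# savedsearches.conf -- report scheduled for 06:00 on the 2nd of each month
[Monthly License Report]
search = <your monthly search here>
cron_schedule = 0 6 2 * *
enableSched = 1
dispatch.ttl = 2678400

# each dashboard panel then uses
| loadjob savedsearch="your_user:search:Monthly License Report"
```

The long dispatch.ttl (roughly 31 days, in seconds) keeps the job artifact around for the whole month; without it, loadjob would have nothing to load once the artifact expires.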
I'm not able to search CloudFront logs from S3; there are no results. But I am able to search ELB logs and CloudTrail logs from S3. Below is my inputs.conf:

[splunk_ta_aws_logs://Cloudfront_logs]
aws_account = splunk_DEV
bucket_name = Mybucketname
bucket_region = us-east-1
host_name = s3.amazonaws.com
interval = 1800
log_file_prefix = cdn_logs
log_name_format = ABCDEFGH.%Y-%m-%d-
log_start_date = 2020-01-01
log_type = cloudfront:accesslogs
max_fails = 10000
max_retries = -1
sourcetype = aws:cloudfront:accesslogs
Do I need a dedicated syslog server to receive the syslog messages and then forward them using a Universal Forwarder, considering I've installed the Splunk Add-on for NetScaler on a HF? If so, what is the significance of having the add-on rather than just the UF?

OR

Can I listen directly on the heavy forwarder on port 514 to receive the messages, again with the Splunk Add-on for NetScaler installed on the HF?

Also, can I manage configuration for this add-on using a Deployment Server, e.g. managing which inputs are monitored? PS: I'm new to NetScaler.
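Listening directly on the HF does work; a minimal sketch (the index name and sourcetype are assumptions — check the add-on's documentation for the sourcetypes it actually expects):

```
# inputs.conf on the heavy forwarder
[udp://514]
sourcetype = citrix:netscaler:syslog
index = netscaler
connection_host = ip
```

Two caveats: ports below 1024 require Splunk to run as root (many deployments use 5140 and point the device there instead), and a dedicated syslog server (rsyslog/syslog-ng writing to files monitored by a forwarder) is generally preferred because syslog traffic arriving while Splunk restarts is simply lost. And yes, inputs like this can be pushed from a Deployment Server as part of the add-on's app.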
Something weird started happening in our Splunk environment with the ITSI native saved search service_health_monitor. This search started getting 100% skipped with the reason: "The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached". So I checked the Jobs section and found that the search was stuck running at x% < 100, and hence the next scheduled search could not start. I tried deleting that job so the search could run in the next cycle, but the next run showed the same behaviour, i.e. stuck halfway. Inspect Job shows most of the time was spent on startup.handoff, and below is what I can see at the end of savedsearch.log: after the noop processor opens (BEGIN OPEN: Processor=noop), Splunk seems stuck. Please provide any insights which can help in investigating further.

09-07-2020 17:41:05.680 INFO LocalCollector - Final required fields list = Message,_raw,_subsecond,_time,alert_level,alert_severity,app,index,indexed_is_service_max_severity_event,is_service_in_maintenance,itsi_kpi_id,itsi_service_id,kpi,kpiid,prestats_reserved_*,psrsvd_*,scoretype,service,serviceid,source,urgency
09-07-2020 17:41:05.680 INFO UserManager - Unwound user context: splunk-system-user -> NULL
09-07-2020 17:41:05.680 INFO UserManager - Setting user context: splunk-system-user
09-07-2020 17:41:05.680 INFO UserManager - Done setting user context: NULL -> splunk-system-user
09-07-2020 17:41:05.680 INFO UserManager - Unwound user context: splunk-system-user -> NULL
09-07-2020 17:41:06.105 INFO UserManager - Unwound user context: splunk-system-user -> NULL
09-07-2020 17:41:06.105 INFO UserManager - Unwound user context: splunk-system-user -> NULL
09-07-2020 17:41:06.105 INFO UserManager - Unwound user context: splunk-system-user -> NULL
09-07-2020 17:41:06.105 INFO UserManager - Unwound user context: splunk-system-user -> NULL
09-07-2020 17:41:06.105 INFO UserManager - Unwound user context: splunk-system-user -> NULL
09-07-2020 17:41:06.171 INFO ChunkedExternProcessor - Exiting custom search command after getinfo since we are in preview mode:gethealth
09-07-2020 17:41:06.177 INFO SearchOrchestrator - Starting the status control thread.
09-07-2020 17:41:06.177 INFO SearchOrchestrator - Starting phase=1
09-07-2020 17:41:06.177 INFO UserManager - Setting user context: splunk-system-user
09-07-2020 17:41:06.177 INFO UserManager - Setting user context: splunk-system-user
09-07-2020 17:41:06.177 INFO UserManager - Done setting user context: NULL -> splunk-system-user
09-07-2020 17:41:06.177 INFO UserManager - Done setting user context: NULL -> splunk-system-user
09-07-2020 17:41:06.177 INFO ReducePhaseExecutor - Stating phase_1
09-07-2020 17:41:06.177 INFO SearchStatusEnforcer - Enforcing disk quota = 26214400000
09-07-2020 17:41:06.177 INFO PreviewExecutor - Preview Enforcing initialization done
09-07-2020 17:41:06.177 INFO DispatchExecutor - BEGIN OPEN: Processor=stats
09-07-2020 17:41:06.209 INFO ResultsCollationProcessor - Writing remote_event_providers.csv to disk
09-07-2020 17:41:06.209 INFO DispatchExecutor - END OPEN: Processor=stats
09-07-2020 17:41:06.209 INFO DispatchExecutor - BEGIN OPEN: Processor=gethealth
09-07-2020 17:41:06.217 INFO DispatchExecutor - END OPEN: Processor=gethealth
09-07-2020 17:41:06.217 INFO DispatchExecutor - BEGIN OPEN: Processor=noop
09-07-2020 17:48:07.948 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
I am running 2 searches:

| rest splunk_server=* /services/data/indexes-extended
| search title = _internal
| stats max(bucket_dirs.home.warm_bucket_count) by title

| dbinspect index=_internal
| search state = warm
| stats count

Both are run over All Time, so why am I getting different counts of warm buckets? Also, my max warm bucket count is restricted to 450, and while the REST API call gives me a number below this, dbinspect gives me 2550. How is this possible?
Hello All, I'm trying to prevent the 'USERID' events from getting indexed by making the following changes on my heavy forwarder. However, after adding the TRANSFORMS-null statement and the [setnull] stanza in transforms.conf, I'm not seeing any logs getting indexed at all. Any guidance is appreciated.

inputs.conf

[monitor:///var/log/palo]
disabled = false
sourcetype = pan:traffic

props.conf

[pan:traffic]
TRANSFORMS-null = setnull
TZ = America/New_York
TRANSFORMS-host = paloalto-host
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
disabled = false
pulldown_type = true

transforms.conf

[paloalto-host]
SOURCE_KEY = _raw
FORMAT = host::$1
DEST_KEY = MetaData:Host

[setnull]
REGEX = ^(?:[^,\n]*,){3}USERID
DEST_KEY = queue
FORMAT = nullQueue
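One thing that stands out (an observation, not a certain diagnosis): the [paloalto-host] transform has FORMAT = host::$1 but no REGEX, so there is no capture group for $1 to refer to, and an index-time transform without a REGEX is invalid. A sketch of a complete host-override transform, assuming the host is the first comma-separated field — adjust the capture to the actual event format:

```
# transforms.conf
[paloalto-host]
SOURCE_KEY = _raw
REGEX = ^([^,]+)
FORMAT = host::$1
DEST_KEY = MetaData:Host
```

It is also worth confirming that USERID really is the 4th comma-separated value in the events you want to drop, and running splunk btool props list pan:traffic --debug on the HF to see which settings are actually winning.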
I have created a custom search command to decode a hexadecimal field in IoT messages. It works fine when used from a simple search, but if I then use the search in a dashboard panel, no data is returned. I can't think why this would be the case. The search command is in its own app, which has been distributed to the search head and indexer and has global accessibility. The command is a streaming command that takes a hexadecimal field as input and outputs fields and values based on the hexadecimal value. Thanks, Adam
Hello Splunkers, I would like to integrate Splunk and ServiceNow and send the triggered alerts to SNOW as incidents. I know there is an app on Splunkbase to integrate with SNOW, but I can't find the steps on how to configure it to send the alerts as incidents in SNOW. Can someone help me with the high-level steps? Thanks in advance.
Hi, I'm very new to Splunk. I have signed up for a trial account and am looking to connect to the HTTP Event Collector using simple curl commands, following the docs here: https://docs.splunk.com/Documentation/SplunkCloud/8.0.2007/Data/UsetheHTTPEventCollector I've set up my new token using the default configuration. I'm certain I'm using the managed service, as I don't have any access to global settings. According to the docs above, to make calls using curl the endpoint is <protocol>://http-inputs-<host>:<port>/<endpoint>, however I can't get anything other than: curl: (6) Could not resolve host: http-inputs-<hostname>.splunkcloud.com Do I need to enable something in my account so this host becomes available? I'm using my login URL (from the screenshot below) as the hostname. That is correct, yes?
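For a managed Splunk Cloud stack, the curl call would look something like the sketch below, where <stackname> is the part of your login URL before .splunkcloud.com and the token is a placeholder:

```
curl -k "https://http-inputs-<stackname>.splunkcloud.com:443/services/collector/event" \
     -H "Authorization: Splunk <your-hec-token>" \
     -d '{"event": "hello from HEC", "sourcetype": "manual"}'
```

If DNS still cannot resolve http-inputs-<stackname>.splunkcloud.com, it may be that the stack is actually self-service rather than managed; the same docs page gives a different endpoint pattern and port (8088) for self-service and trial stacks, so that variant is worth trying too.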
Hi All, We are using Splunk Cloud in our environment and currently ingest 150 GB per day against our license. We plan to extend our Splunk Cloud subscription by an additional 25 GB. I want to know approximately how much an additional 25 GB of license costs, so that I can review it and update the quotation for management. It would be really helpful if someone could provide an approximate quote.
Hi Team, We recently upgraded Splunk to version 8.0. The dashboards and reports created in Advanced XML no longer work, so we are re-creating them in Simple XML. Earlier we used a view called Flashtimeline, built in Advanced XML, which showed normal events in a very organised, user-understandable way. Below is a sample log event as displayed in the flashtimeline view (displayed when we just run "index=xx sourcetype=yy host=zz"):

Host: xxx
Service: DATABASE
Has Details: N
Is Sample: N
Process Name: yyy
Request Id: zzz
Request: {call sp_name('*')}
Start Timestamp: 2020-09-07 04:29:56.986
End Timestamp: 2020-09-07 04:29:57.242
Timing Details (Total Exec Time=256 ms)
Name              Time since beginning (ms)   Execution Time (ms)   % of Total Time
BEGIN             0                           0                     0
preExecution      0                           0                     0
prepareStatement  0                           0                     0
setParameters     0                           0                     0
executeQuery      0                           251                   98
handleExecution   251                         5                     1
END               256                         0                     0

This is a very neat, organised, and easily understandable format for an event. Here is the actual display of the same event when we search in the Search app with the same query "index=xx sourcetype=yy host=zz":

"2020-09-07 04:29:00.995","10.241.140.193","DATABASE","sp_name","2020-09-07 04:29:01.197","xxx","1","202","","BEGIN","1599452940995","preExecution","1599452940995","prepareStatement","1599452940995","setParameters","1599452940995","executeQuery","1599452940995","handleExecution","1599452940998","END","1599452941197","-","-1","-","-1","-","-1","yyy","-","zzz","N"," {CALL sp_name(?, ?, ?, ?)}","N"

I am aware that Flashtimeline was deprecated back in Splunk 6.x and replaced with the Search app, but I would like to display events in a neat, organised way (like the first sample) using Simple XML. Could anyone please help me with this as soon as possible?
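There is no direct Simple XML replacement for the flashtimeline rendering, but you can get close once the CSV positions are extracted into named fields. A sketch, with field names taken from the formatted sample above (the extraction itself would be done with props/rex against your sourcetype):

```
index=xx sourcetype=yy host=zz
| table Host Service "Process Name" "Request Id" Request "Start Timestamp" "End Timestamp"
```

Alternatively, a Simple XML dashboard's event viewer element (<event> with the list display type) shows each event with its extracted field names and values underneath, which is probably the closest built-in analogue to the old flashtimeline layout.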