All Topics

Good morning everyone. For my customer, I have a Splunk deployment as follows: 1 Search Head, 3 Indexers in a cluster, and 1 Monitoring Console/License Master/Master Node. I need to integrate our Qualys solution with Splunk, but I'm reading that the Technology Add-on should be installed on a forwarder. However, we do not have a Heavy Forwarder. Could I install it on an indexer instead? Would data replication still be available for the qualys index? Thanks in advance, Luca
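For the replication part, a minimal sketch of the cluster side, assuming the qualys index is defined in the bundle pushed from the master node (index and path names are illustrative): replication applies to any index with repFactor = auto, regardless of where the add-on's inputs run.

  # indexes.conf (pushed from the master node via master-apps)
  [qualys]
  homePath   = $SPLUNK_DB/qualys/db
  coldPath   = $SPLUNK_DB/qualys/colddb
  thawedPath = $SPLUNK_DB/qualys/thaweddb
  repFactor  = auto   # replicate this index's buckets across the cluster peers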
Hello everyone, I'm trying to get a list of IP addresses from an internet page and then put them into a lookup table. My issue is that I can't use mvexpand to put each IP address into its own row... here is my search:

  | curl method=get uri=https://feodotracker.abuse.ch/downloads/ipblocklist_recommended.txt
  | fields curl_message
  | rex field=curl_message mode=sed "s/.*#//g"
  | rex field=curl_message mode=sed "s/DstIP//g"
  | rex field=curl_message mode=sed "s/^\s+//g"

As a result I get one big block of data in a single row. How can I split it into multiple rows? Thank you all for the support.
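A minimal sketch of one way to split that block, assuming the whole downloaded list sits in curl_message as in the search above (the lookup file name is illustrative): extract every IPv4-looking token into a multivalue field, then expand it into rows.

  | curl method=get uri=https://feodotracker.abuse.ch/downloads/ipblocklist_recommended.txt
  | fields curl_message
  | rex field=curl_message max_match=0 "(?<ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
  | mvexpand ip
  | table ip
  | outputlookup feodo_blocklist.csv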
I have JSON that is really an array of values but has been encoded as objects, something like this:

  {
    "metrics": {
      "timers": {
        "foo_timer":            { "count": 1, "max": 452603, "mean": 452603, "min": 452603 },
        "bar_some_other_timer": { "count": 1, "max": 367110, "mean": 367110, "min": 367110 }
      }
    }
  }

I can display this in a table by iterating with foreach, but what I really want to do is search for events where max > 400000 and then display it with the name of the timer - in the example above, that would match foo_timer. The names of the timers can be anything and the order is not guaranteed. I've tried all sorts today and keep coming up short.
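A minimal sketch of a foreach-based approach, assuming the fields are auto-extracted as metrics.timers.<name>.max (the 400000 threshold comes from the question): collect the matching timer names into a multivalue field and keep only events where at least one timer crossed the threshold.

  <base search>
  | foreach metrics.timers.*.max
      [ eval slow_timers=if('<<FIELD>>' > 400000, mvappend(slow_timers, "<<MATCHSEG1>>"), slow_timers) ]
  | where isnotnull(slow_timers)
  | table _time slow_timers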
Unable to capture application data when the sample app is running with the Java agent. It worked last week, but it stopped working two days ago.
Hi, I need help solving an issue. I have a table, shown below. I want to hide the rows where the name contains "Raju", but if we export this table to CSV it should export all the results, including the rows with "Raju". Can anyone please help us solve this? Thank you.
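One hedged sketch, assuming the name lives in a field called Name (the field name is an assumption): filter the displayed panel's search and keep a separate, unfiltered search or report for the CSV export, since the export button only exports what the panel's own search returns.

  <base search>
  | where NOT match(Name, "Raju")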
Dear Team, I just want to use the simple search below to see which indexes have a zero count for that day/week/whichever time period.

  index=* | stats count by index | where count=0

However, the search is not returning anything, and if I remove the where count=0 it only returns indexes with more than zero events. How do I make sure that indexes with count=0 are included? Thank you. Warm Regards.
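Because stats only sees indexes that actually returned events, one sketch is to pull the full index list from the REST API and fill in zeros (run from a role that can see all indexes; tstats keeps the counting fast):

  | tstats count where index=* by index
  | append
      [ | rest /services/data/indexes
        | stats count by title
        | rename title as index
        | eval count=0 ]
  | stats max(count) as count by index
  | where count=0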
I created a trial account. Is it possible to configure Synthetic jobs using a trial account? I can see in the license that it's possible to create them.
I have a Data Model called Web_Events with a root object called Access. There is a field in Access called 'status_category' with values "client error", "server error", "okay" or "other". I am trying to list the count of events which have 'status_category' as "client error" and "server error", hour by hour. So I want to generate a table of the following format:

  _time                  client_error_count        server_error_count
  2022-01-26:17:30:00    <count of client error>   <count of server error>
  2022-01-26:18:30:00    <count of client error>   <count of server error>

Can anyone help me with this? The closest I could achieve was the following:

  _time                  Access.status_category    error_count
  2022-01-26:17:30:00    server error              2
  2022-01-26:18:30:00    client error              6
  2022-01-26:18:30:00    server error              7

with the help of this query (status_code is another field which contains the values of the HTTP status codes):

  | tstats count(Access.status_code) as error_count from datamodel=Web_Events.Access where Access.status_code!=200 earliest="01/26/2022:00:00:00" latest="02/02/2022:23:59:59" BY Access.status_category _time span=1h
  | table _time, Access.status_category, error_count
  | sort _time
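A minimal sketch building on that query: restrict to the two categories and pivot them into columns with xyseries (the column renaming step is illustrative).

  | tstats count(Access.status_code) as error_count from datamodel=Web_Events.Access
      where Access.status_code!=200 (Access.status_category="client error" OR Access.status_category="server error")
      by Access.status_category _time span=1h
  | rename Access.status_category as status_category
  | eval status_category=replace(status_category, " ", "_") . "_count"
  | xyseries _time status_category error_count
  | sort _time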
I have two events that are semicolon-separated key-value pairs. I have applied the extract command to parse the events into key-value pairs. The aim is to compare the two events using the keys and highlight the differences in values, in a table format with the keys as headers and the values as rows.

  Event 1: 35=D; 54=2; 40=1; 11=abc
  Event 2: 35=G; 54=2; 40=2; 11=xyz

  Result:
  35 | 40 | 11
  D  | 1  | abc
  G  | 2  | xyz

Which function will index my keys so that I may compare their values and report them in the above format? Extraction performed as follows:

  <search> | extract pairdelim=";" kvdelim="\=" clean_keys=false
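A minimal sketch, assuming the extraction above already creates one field per key on each event: table then lays the keys out as column headers, with one row per event (list the keys of interest explicitly, or use a wildcard).

  <search>
  | extract pairdelim=";" kvdelim="=" clean_keys=false
  | table 35 40 11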
I've deployed Splunk in a VM. How do I get the Splunk instance's application metrics into Prometheus?
There are 2000 dashboards in Splunk; some are used and some are not. How do I check which ones are used? And how can I migrate them to ELK while filtering out the unused or junk dashboards?
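For the usage side, one hedged sketch: dashboard views show up in Splunk's own web-access logs, so something like the following (the URI pattern and field names are assumptions that may need adjusting per version) gives a last-viewed list to compare against the full inventory from | rest /servicesNS/-/-/data/ui/views.

  index=_internal sourcetype=splunk_web_access uri_path="*/app/*"
  | rex field=uri_path "/app/(?<app>[^/]+)/(?<dashboard>[^/?\s]+)"
  | stats count as views latest(_time) as last_viewed by app dashboard
  | convert ctime(last_viewed)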
Hello, We are in the process of ingesting Palo Alto logs from a separate organization’s network into our instance of Splunk Enterprise Security (on-prem), which resides on another network. Connectivity between our organizations is facilitated through an interconnection provided by a product called Equinix, so data and file interchanges between our organizations are secure over the internet. I’m trying to determine a performant and cost-efficient method of ingesting the other organization’s logs into our network. We’re ingesting our internal organization's Palo Alto FW logs by forwarding them to a syslog server, and they’re sent to our Splunk indexers from there. How different would the log ingestion mechanism for an external org’s Palo Alto logs be? Any help would be greatly appreciated!
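For reference, one common sketch is to mirror the internal pattern: a syslog server plus a forwarder on the other org's side, sending to your indexers over the interconnect with TLS enabled. The host names below are illustrative, not from the question.

  # outputs.conf on the forwarder in the external organization
  [tcpout:es_indexers]
  server = idx1.yourcorp.example:9997, idx2.yourcorp.example:9997
  useSSL = true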
Hello, I know we can send alerts from Splunk to BMC TrueSight, but I would like help with sending the events generated at BMC TrueSight to Splunk. Please let me know if there is any way to do this. Thanks!
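On the Splunk side, the generic way to receive events pushed from an external tool is the HTTP Event Collector; whether TrueSight can call it natively or would need a relay script is an assumption to verify. A minimal sketch (token, index, and sourcetype names are illustrative):

  # inputs.conf on the receiving Splunk instance: enable HEC and define a token
  [http]
  disabled = 0

  [http://truesight]
  token = <generated-token>
  index = truesight
  sourcetype = bmc:truesight:event

  # Example POST the TrueSight side (or a relay script) would make:
  # curl -k https://splunk.example:8088/services/collector/event \
  #      -H "Authorization: Splunk <generated-token>" \
  #      -d '{"sourcetype": "bmc:truesight:event", "event": {"msg": "sample"}}'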
Hi Team / @Anonymous, I have tried instrumenting a .NET standalone application with the help of the reference link below:
https://docs.appdynamics.com/display/PRO41/Instrument+Windows+Services+and+Standalone+Applications
I could load the profiler and was able to see the application under "Tiers & Nodes", but I was unable to get transaction snapshots or metrics in the AppDynamics controller. I see the below error in AgentLog.txt:

  2022-02-10 08:41:24.0104 21836 AppDynamics.Coordinator 1 27 Error MachineAgentManager Metrics Error sending metrics - will requeue for later transmission
  Exception: com.appdynamics.ee.agent.commonservices.metricgeneration.metrics.MetricSendException: System.Exception: Failed to execute request to endpoint [https://****************.saas.appdynamics.com:443/controller/instance/76787/metrics_PB_]. For more details if needed please set trace level for logger [com.appdynamics.REST.RESTProtobufCommunicator] ---> System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a send. ---> System.IO.IOException: The handshake failed due to an unexpected packet format.
    at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest)
    at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult)
    at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
    at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
    at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
    at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result)
    at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size)
    at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size)
    at System.Net.ConnectStream.WriteHeaders(Boolean async)
    --- End of inner exception stack trace ---

Note: I can get metrics for the .NET applications deployed in IIS. Please look into this issue and help us resolve it. Thanks in advance.
Hi, all! How could I turn this pattern "HKL20167991SIT_7_8299=true" from my log files into 'XXXX' (the last four digits) as the key and 'true/false' as the value? Here's my log file:

  HKL20167991SIT_7_8299=true, HKL20167991SIT_8_8260=true, HKL20167991SIT_4_8296=true, HKL20167991SIT_26_8274=true, HKL20167991SIT_32_827A=true, HKL20167991SIT_29_8277=true, HKL20167991SIT_35_827D=true, HKL20167991SIT_22_828E=true, HKL20167991SIT_24_8272=true, HKL20167991SIT_1_825A=true, HKL20167991SIT_31_8279=true, HKL20167991SIT_9_8261=true, HKL20167991SIT_11_8263=true, HKL20167991SIT_14_8266=true, HKL20167991SIT_27_8275=true, HKL20167991SIT_17_8269=true, HKL20167991SIT_37_827F=true, HKL20167991SIT_28_8276=true, HKL20167991SIT_34_827C=true, HKL20167991SIT_20_827C=true, HKL20167991SIT_25_8273=true, HKL20167991SIT_12_8264=true, HKL20167991SIT_15_8267=true, HKL20167991SIT_5_8297=true, HKL20167991SIT_19_826B=true, HKL20167991SIT_3_8295=true, HKL20167991SIT_10_8262=true, HKL20167991SIT_13_8265=true, HKL20167991SIT_18_826A=true, HKL20167991SIT_16_8268=true, HKL20167991SIT_33_827B=true, HKL20167991SIT_36_827E=true, HKL20167991SIT_2_825B=true, HKL20167991SIT_21_827D=true, HKL20167991SIT_23_828F=true, HKL20167991SIT_30_8278=true, HKL20167991SIT_6_8298=true

The result I need is:

  Port   Status
  8299   true
  827D   true
  8278   true
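A minimal sketch with rex, assuming the prefix always looks like HKL...SIT_<n>_ as in the sample: extract all Port/Status pairs as multivalue fields, zip them together, and expand to one row per pair.

  <search>
  | rex field=_raw max_match=0 "SIT_\d+_(?<Port>[0-9A-F]{4})=(?<Status>true|false)"
  | eval pair=mvzip(Port, Status, "|")
  | mvexpand pair
  | rex field=pair "^(?<Port>[^|]+)\|(?<Status>.+)$"
  | table Port Status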
I'm using Splunk Enterprise 8.2.4 and trying to get my forwarders to forward perfmon counters (CPU, Disk Space, etc.) into a metrics index at my indexer cluster. It seems to me, from my reading so far, that:
- This requires the Splunk App for Infrastructure (which is EOL, I think?) or the Splunk Add-on for Windows
- This might affect existing collection of metrics that are being indexed as events
- This is a configuration done at the indexer and not the forwarder
In short, I'm confused about how to achieve this! Any assistance would be much appreciated!
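For reference, a hedged sketch of the forwarder-side piece, assuming the Splunk Add-on for Microsoft Windows is installed and that its metric-aware Perfmon sourcetypes are available in your version (stanza values are illustrative; the target index must be created as a metrics index on the indexers):

  # inputs.conf on the Windows forwarder
  [perfmon://CPU]
  object = Processor
  counters = % Processor Time; % User Time
  instances = _Total
  interval = 10
  index = windows_metrics
  sourcetype = PerfmonMetrics:CPU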
I have a requirement to move indexed data from index-A to another index, index-B, in a SmartStore-enabled cluster. Both indexes (A & B) have data in the AWS S3 bucket. I would like to know if the steps below would work.
Steps:
1. Stop the incoming data to index-A
2. Roll the hot buckets on index-A
3. Move the data for index-A to index-B in S3, using: aws s3 sync s3://bucket-name/index-A s3://bucket-name/index-B
4. Run the bootstrap command on the CM.
Hi All, I have configured non-IIS Windows services to be monitored under different tiers in my AppDynamics setup. How do I set up alerting on them, so that when a service is stopped or restarted, I get an alert? Thank you in advance. Best, Adi
I am sure this is a pretty common use case: because IP addresses move, the data is not static, so for security retro hunts (or even just searching a few days of data) the geo data needs to be static in the event data and can't be a search-time lookup. Honestly, I can't even think of a use case where you would ever want geo data to come from a search-time lookup, but I am sure there are some out there. Elasticsearch has a couple of options to do this (i.e. ingest nodes or Logstash), so I am sure a million people are doing this in Splunk. If someone could point me at the documentation I would appreciate it. The closest thing I could find is ingest-time eval, but I am not sure how that does GeoIP enrichment.
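For comparison, one hedged sketch of how the geo data is often baked in without true ingest-time enrichment: run iplocation at search time on a frequent schedule and write the enriched events to a summary index, so the recorded coordinates reflect the IP's location at roughly the time the event arrived. Index and field names are illustrative.

  index=firewall earliest=-15m@m latest=@m
  | iplocation src_ip
  | fields _time src_ip dest_ip Country Region City lat lon
  | collect index=firewall_geo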
Warning: long, detailed explanation ahead. The summary version is that I have nested JSON arrays and fields that I am having trouble extracting properly into individual fields. The chosen fields will change over time, based on external factors, so I need to be able to extract and report on all of them, with the ability to identify the array index (i.e. {0}, {1}, etc.). No solution that I have looked at or come up with is working for me, so I am turning to you smarter folks for help.

Detail: I have nested JSON arrays and fields that I am having trouble extracting properly into individual fields. The end result is that I want to be able to place alerts or report on various fields that are deemed interesting. These are "request" and "response" arrays in each transaction (think checking items in a shopping cart for various flags and indicators). The chosen fields will change over time, based on external factors, so I need to be able to extract them from the array and report on all of them at some point. Here is a sample request and response. As you can see, the request array is market_basket.request{} and the response is market_basket.response{}. Focusing on the response portion, the first response has an "02" field and a "dataset". The next response {1} has fields 02, 03, 04, 05, 08, etc., and the same goes for response{2} and response{3}.

If I do a simple rename

  | rename market_basket.response.* to Resp_*

the fields don't line up. The contents of "Resp_19" should be down one line, as there was no field 19 in market_basket.response{0}. If I change the query to this

  | spath path=market_basket.response{} output=Response
  | spath input=Response
  | table tran_id 0* 1* 2* dataset

then I only get the first row; the other 3 rows don't show up. The only way that I have been able to get it to work is to address each index and field individually:

  | spath path=market_basket.response{0} output=Resp_0
  | spath path=market_basket.response{0}.dataset output=Resp_0_dataset
  | spath path=market_basket.response{0}.02 output=Resp_0_02
  | spath path=market_basket.response{1} output=Resp_1
  | spath path=market_basket.response{1}.dataset output=Resp_1_dataset
  | spath path=market_basket.response{1}.01 output=Resp_1_01
  | spath path=market_basket.response{1}.02 output=Resp_1_02
  | spath path=market_basket.response{1}.03 output=Resp_1_03
  | spath path=market_basket.response{1}.04 output=Resp_1_04
  | spath path=market_basket.response{1}.05 output=Resp_1_05
  | spath path=market_basket.response{1}.06 output=Resp_1_06
  | spath path=market_basket.response{1}.07 output=Resp_1_07
  | spath path=market_basket.response{1}.08 output=Resp_1_08
  | spath path=market_basket.response{1}.09 output=Resp_1_09
  | spath path=market_basket.response{1}.10 output=Resp_1_10
  | spath path=market_basket.response{1}.11 output=Resp_1_11
  | spath path=market_basket.response{1}.12 output=Resp_1_12
  | spath path=market_basket.response{1}.13 output=Resp_1_13
  | spath path=market_basket.response{1}.14 output=Resp_1_14
  | spath path=market_basket.response{1}.15 output=Resp_1_15
  | spath path=market_basket.response{1}.16 output=Resp_1_16
  | spath path=market_basket.response{1}.17 output=Resp_1_17
  | spath path=market_basket.response{1}.18 output=Resp_1_18
  | spath path=market_basket.response{1}.19 output=Resp_1_19
  | spath path=market_basket.response{1}.20 output=Resp_1_20
  | spath path=market_basket.response{1}.21 output=Resp_1_21
  ...

But with up to 60 responses and 20 fields per transaction, that many spaths would be a non-starter.
This is especially true considering that I need to factor in the request portions too at some point. Finally, to give an example use case: I want to be able to check field 19 on the response, and if the flag starts with "NN" or "NY", put out an alert of the form "Item <market_basket.response{whatever #}.02> has not been cleared for sale", where the flag is market_basket.response{whatever #}.19. I know that was a lot of detail, but I wanted to make sure I laid out the different ways that I tried. Any help would be much appreciated!
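A minimal sketch of the array handling, assuming each event carries a tran_id field as in the table attempts above: expand the response{} array into one row per element so the numbered fields line up, then filter on field 19. The resp_index bookkeeping is illustrative.

  <base search>
  | spath path=market_basket.response{} output=Response
  | mvexpand Response
  | streamstats count as resp_index by tran_id
  | eval resp_index=resp_index-1
  | spath input=Response
  | where like('19', "NN%") OR like('19', "NY%")
  | table tran_id resp_index 02 19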