All Topics

I have two events that are semicolon-separated key-value pairs. I have applied the extract command to parse each event into key-value pairs. The aim is to compare the two events by key and highlight the differences in values in a table, with the keys as the header and the values as rows.

Event 1: 35=D; 54=2; 40=1; 11=abc
Event 2: 35=G; 54=2; 40=2; 11=xyz

Result:
35|40|11
D|1|abc
G|2|xyz

Which function will index my keys so that I can compare their values and report in the above format? Extraction is performed as follows:

<search> | extract pairdelim=";" kvdelim="\=" clean_keys=false
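One possible approach (a rough, untested sketch, assuming the search returns exactly the two events shown and that extract has already produced fields named 11, 35, 40 and 54): convert the extracted fields to a long key/value form, drop the keys whose values are identical across both events, and pivot back so each event becomes a row.

  <search>
  | extract pairdelim=";" kvdelim="\=" clean_keys=false
  | streamstats count as event_no
  | table event_no 11 35 40 54
  | untable event_no key value
  | eventstats dc(value) as distinct_values by key
  | where distinct_values > 1
  | xyseries event_no key value

The xyseries at the end rebuilds one row per event with only the differing keys as columns; if the 35|40|11 column order matters, add a final table command listing the keys in that order.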
I have deployed Splunk in a VM. How can I get the Splunk instance's application metrics into Prometheus?
There are 2000 dashboards in Splunk. Some of them are used and some are not. How can I check which ones are unused? How can I migrate the dashboards to ELK after filtering out the unused or junk ones?
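One way to approximate "unused" (a sketch only; the splunk_web_access sourcetype, its uri_path field, and the /app/<app>/<dashboard> URI pattern are assumptions that may differ in your version, and the search time range defines how far back "unused" means): list every dashboard via REST and left-join it against recent views recorded in the _internal index.

  | rest /servicesNS/-/-/data/ui/views splunk_server=local
  | rename title as dashboard, eai:acl.app as app
  | fields app dashboard
  | join type=left app dashboard
      [ search index=_internal sourcetype=splunk_web_access method=GET uri_path="*/app/*"
        | rex field=uri_path "/app/(?<app>[^/]+)/(?<dashboard>[^/?\s]+)"
        | stats count as views latest(_time) as last_viewed by app dashboard ]
  | fillnull value=0 views
  | where views=0

Dashboards that come back with views=0 over a long enough window are candidates for exclusion from the migration.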
Hello, we are in the process of ingesting Palo Alto logs from a separate organization's network into our instance of Splunk Enterprise Security (on-prem), which resides on another network. Connectivity between the two organizations is facilitated through an interconnection provided by a product called Equinix, so data and file interchange between our organizations is secure over the internet. I'm trying to determine a performant and cost-efficient method of ingesting the other organization's logs into our network. We ingest our internal organization's Palo Alto firewall logs by forwarding them to a syslog server, from which they are sent to our Splunk indexers. How different would the log ingestion mechanism for an external org's Palo Alto logs be? Any help would be greatly appreciated!
Hello, I know we can send alerts from Splunk to BMC TrueSight, but I would like help with sending the events generated in BMC TrueSight to Splunk. Please let me know if there is any way to do it. Thanks!
Hi Team/ @Anonymous, I have tried instrumenting a .NET standalone application with the help of the reference link below:
https://docs.appdynamics.com/display/PRO41/Instrument+Windows+Services+and+Standalone+Applications

I could load the profiler and was able to see the application under "Tiers & Nodes", but I was unable to get transaction snapshots or metrics in the AppDynamics controller. I see the below error in AgentLog.txt:

2022-02-10 08:41:24.0104 21836 AppDynamics.Coordinator 1 27 Error MachineAgentManager Metrics Error sending metrics - will requeue for later transmission
Exception: com.appdynamics.ee.agent.commonservices.metricgeneration.metrics.MetricSendException: System.Exception: Failed to execute request to endpoint [https://****************.saas.appdynamics.com:443/controller/instance/76787/metrics_PB_]. For more details if needed please set trace level for logger [com.appdynamics.REST.RESTProtobufCommunicator] ---> System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a send. ---> System.IO.IOException: The handshake failed due to an unexpected packet format.
   at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
   at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest)
   at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest)
   at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest)
   at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult)
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result)
   at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size)
   at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size)
   at System.Net.ConnectStream.WriteHeaders(Boolean async)
   --- End of inner exception stack trace ---

Note: I can get metrics for the .NET applications deployed in IIS. Please look into this issue and help us resolve it. Thanks in advance.
Hi, all! How can I turn a pattern like "HKL20167991SIT_7_8299=true" in my log files into a key-value pair whose key is the last four characters (e.g. 8299) and whose value is true/false?

Here's my log file:

HKL20167991SIT_7_8299=true, HKL20167991SIT_8_8260=true, HKL20167991SIT_4_8296=true, HKL20167991SIT_26_8274=true, HKL20167991SIT_32_827A=true, HKL20167991SIT_29_8277=true, HKL20167991SIT_35_827D=true, HKL20167991SIT_22_828E=true, HKL20167991SIT_24_8272=true, HKL20167991SIT_1_825A=true, HKL20167991SIT_31_8279=true, HKL20167991SIT_9_8261=true, HKL20167991SIT_11_8263=true, HKL20167991SIT_14_8266=true, HKL20167991SIT_27_8275=true, HKL20167991SIT_17_8269=true, HKL20167991SIT_37_827F=true, HKL20167991SIT_28_8276=true, HKL20167991SIT_34_827C=true, HKL20167991SIT_20_827C=true, HKL20167991SIT_25_8273=true, HKL20167991SIT_12_8264=true, HKL20167991SIT_15_8267=true, HKL20167991SIT_5_8297=true, HKL20167991SIT_19_826B=true, HKL20167991SIT_3_8295=true, HKL20167991SIT_10_8262=true, HKL20167991SIT_13_8265=true, HKL20167991SIT_18_826A=true, HKL20167991SIT_16_8268=true, HKL20167991SIT_33_827B=true, HKL20167991SIT_36_827E=true, HKL20167991SIT_2_825B=true, HKL20167991SIT_21_827D=true, HKL20167991SIT_23_828F=true, HKL20167991SIT_30_8278=true, HKL20167991SIT_6_8298=true

The result I need is:

Port  Status
8299  true
827D  true
8278  true
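A rough sketch of one way to do this at search time (assuming the last four characters before the equals sign are always hex digits and the value is always true or false; the regex would need adjusting otherwise):

  <your search>
  | rex max_match=0 "SIT_\d+_(?<pair>[0-9A-Fa-f]{4}=(?:true|false))"
  | mvexpand pair
  | rex field=pair "(?<Port>[0-9A-Fa-f]{4})=(?<Status>true|false)"
  | table Port Status

max_match=0 captures every occurrence in the event into a multivalue field, and mvexpand turns each captured pair into its own row before the second rex splits it into Port and Status.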
I'm using Splunk Enterprise 8.2.4 and trying to get my forwarders to forward perfmon counters (CPU, disk space, etc.) into a metrics index on my indexer cluster. From what I've read, it seems that:
- This requires the Splunk App for Infrastructure (which I think is EOL?) or the Splunk Add-on for Windows.
- This might affect existing collection of metrics that are being indexed as events.
- This is a configuration done at the indexer and not the forwarder.
In short, I'm confused about how to achieve this! Any assistance would be much appreciated!
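For what it's worth, one commonly described route (sketched below with assumptions: the Splunk Add-on for Microsoft Windows is installed on the forwarders, a metrics index — the name windows_metrics here is a placeholder — already exists on the indexers, and the PerfmonMetrics:* sourcetypes shipped with recent versions of the add-on are available) is an inputs.conf stanza on the forwarder rather than the indexer; the Splunk App for Infrastructure is not required:

  # inputs.conf on the universal forwarder (e.g. in the add-on's local directory)
  [perfmon://CPU]
  object = Processor
  counters = % Processor Time; % User Time
  instances = *
  interval = 10
  index = windows_metrics
  sourcetype = PerfmonMetrics:CPU
  disabled = 0

Existing perfmon stanzas that write events to an event index are separate stanzas and can be left as they are.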
I have a requirement to move indexed data from index-A to another index, index-B, in a SmartStore-enabled cluster. Both indexes (A and B) have data in the AWS S3 bucket. I would like to know if the below steps would work.

Steps:
1. Stop the incoming data to index-A.
2. Roll the hot buckets on index-A.
3. Move the data for index-A in S3 to index-B, using: aws s3 sync s3://bucket-name/index-A s3://bucket-name/index-B
4. Run the bootstrap command on the CM.
Hi all, I have configured non-IIS Windows services to be monitored under different tiers in my AppDynamics setup. How do I set up alerting on them, so that when a service is stopped or restarted, I get an alert? Thank you in advance. Best, Adi
I am sure this is a pretty common use case: IP addresses move, so the data is not static. For security retro-hunts, or even just searching a few days of data, the geo data needs to be static in the event and can't be a search-time lookup. Frankly, I can't think of a use case where you would ever want geo data to be a search-time lookup, but I'm sure there are some out there. Elasticsearch has a couple of options to do this (ingest nodes or Logstash), so I'm sure plenty of people are doing this in Splunk. If someone could point me at the documentation, I would appreciate it. The closest thing I could find is ingest-time eval, but I'm not sure whether that can do GeoIP enrichment.
Warning: long, detailed explanation ahead.

Summary version: I have nested JSON arrays and fields that I am having trouble extracting properly into individual fields. The chosen fields will change over time based on external factors, so I need to be able to extract and report on all of them, with the ability to identify the array index (i.e. {0}, {1}, etc.). No solution that I have looked at or come up with is working for me, so I am turning to you smarter folks for help.

Detail: I want to be able to alert or report on various fields that are deemed interesting. These are "request" and "response" arrays in each transaction (think of checking items in a shopping cart for various flags and indicators). The chosen fields will change over time based on external factors, so I need to be able to extract them from the array and report on all of them at some point.

Here is a sample request and response (screenshot). As you can see, the request array is market_basket.request{} and the response is market_basket.response{}. Focusing on the response portion, the first response has an "02" field and a "dataset". The next response{1} has fields 02, 03, 04, 05, 08, etc., and the same goes for response{2} and response{3}.

If I do a simple rename:

| rename market_basket.response.* AS Resp_*

the fields don't line up. The contents of "Resp_19" should be down one line, as there was no field 19 in market_basket.response{0} (see screenshot).

If I change the query to this:

| spath path=market_basket.response{} output=Response
| spath input=Response
| table tran_id 0* 1* 2* dataset

then I only get the first row; the other 3 rows don't show up. The only way that I have been able to get it to work is to address each index and field individually:

| spath path=market_basket.response{0} output=Resp_0
| spath path=market_basket.response{0}.dataset output=Resp_0_dataset
| spath path=market_basket.response{0}.02 output=Resp_0_02
| spath path=market_basket.response{1} output=Resp_1
| spath path=market_basket.response{1}.dataset output=Resp_1_dataset
| spath path=market_basket.response{1}.01 output=Resp_1_01
| spath path=market_basket.response{1}.02 output=Resp_1_02
| spath path=market_basket.response{1}.03 output=Resp_1_03
| spath path=market_basket.response{1}.04 output=Resp_1_04
| spath path=market_basket.response{1}.05 output=Resp_1_05
| spath path=market_basket.response{1}.06 output=Resp_1_06
| spath path=market_basket.response{1}.07 output=Resp_1_07
| spath path=market_basket.response{1}.08 output=Resp_1_08
| spath path=market_basket.response{1}.09 output=Resp_1_09
| spath path=market_basket.response{1}.10 output=Resp_1_10
| spath path=market_basket.response{1}.11 output=Resp_1_11
| spath path=market_basket.response{1}.12 output=Resp_1_12
| spath path=market_basket.response{1}.13 output=Resp_1_13
| spath path=market_basket.response{1}.14 output=Resp_1_14
| spath path=market_basket.response{1}.15 output=Resp_1_15
| spath path=market_basket.response{1}.16 output=Resp_1_16
| spath path=market_basket.response{1}.17 output=Resp_1_17
| spath path=market_basket.response{1}.18 output=Resp_1_18
| spath path=market_basket.response{1}.19 output=Resp_1_19
| spath path=market_basket.response{1}.20 output=Resp_1_20
| spath path=market_basket.response{1}.21 output=Resp_1_21
...

But with up to 60 responses and 20 fields each per transaction, that many spath commands would be a non-starter, especially considering that I need to factor in the request portions too at some point.

Finally, to give an example use case: I want to check field 19 on the response, and if the flag starts with "NN" or "NY", put out an alert: "Item <market_basket.response{whatever #}.02> has not been cleared for sale". The flags are in market_basket.response{whatever #}.19.

I know that was a lot of detail, but I wanted to make sure that I put down the different ways that I tried. Any help would be much appreciated!
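A possible direction (a sketch only, untested against this data, and assuming tran_id uniquely identifies a transaction): expand the response array into one row per element before parsing it, so the later responses aren't dropped, and keep track of the array position with streamstats.

  <search>
  | spath path=market_basket.response{} output=Response
  | mvexpand Response
  | streamstats count as response_index by tran_id
  | spath input=Response
  | table tran_id response_index 0* 1* 2* dataset

From there, the example use case could become something like | where like('19', "NN%") OR like('19', "NY%"), with single quotes so the numeric field name 19 is treated as a field reference rather than a number.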
This event is printed every time a user logs in. UserPin, AreaCode, AreaNum, Sector and Short Sem are unique to each userid and appear only inside the "User Login successfully" message, with a timestamp:

"message":" *** User Login successfully credentials userid 2NANO-323254-7654-4 UserPin - 287654 AreaCode - 98765 AreaNum - 98765 Sector - 87612345 Short Sem - ZEB"

The two events below are only printed when certain conditions are met. I am very new to Splunk. How can I write a query that takes the userid (with its UserPin, AreaCode, AreaNum, Sector and Short Sem) only when that userid also appears in the events below, and builds a table from it? If the two messages below are not printed for a userid from the login message, that userid should not be considered.

"message": "User Failed to login userid - 2NANO-323254-7654-4"
"message": "User is from stackoverflow group, on XZ ABCE for userid - 2NAN0-323254-7654-4"

This is the table structure I want to fill:
UserId | UserPin | AreaCode | AreaNum | Sector | Short_Sem

Can someone guide me on how to start and where to look? Any hint or demo will help. Thank you.

Example:
"message":" *** User Login successfully credentials userid 2NANO-323254-7654-4 UserPin - 287654 AreaCode - 98765 AreaNum - 98765 Sector - 87612345 Short Sem - ZEB"
"message": "User Failed to login userid - 2NANO-323254-7654-4"
"message": "User is from stackoverflow group, on XZ ABCE for userid - 2NAN0-323254-7654-4"
"message":" *** User Login successfully credentials userid 2ABDO-54312-7654-4 UserPin - 287654 AreaCode - 98765 AreaNum - 98765 Sector - 87612345 Short Sem - ZEB"
"message":" *** User Login successfully credentials userid 2COMA-765234-8653-4 UserPin - 287654 AreaCode - 98765 AreaNum - 98765 Sector - 87612345 Short Sem - ZEB"

Here we consider only the first userid, because it is the only one that has the two additional events, and all of its events have timestamps:

UserId | UserPin | AreaCode | AreaNum | Sector | Short_Sem
2NANO-323254-7654-4 | 287654 | 98765 | 98765 | 87612345 | ZEB
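A starting-point sketch (assumptions: the events live in an index called myindex here as a placeholder, the message field is already extracted — otherwise run the rex commands against _raw — and the message formats are exactly as in the example, so the regexes may need tuning):

  index=myindex ("User Login successfully" OR "User Failed to login" OR "User is from stackoverflow group")
  | rex field=message "Login successfully credentials userid (?<UserId>\S+) UserPin - (?<UserPin>\S+) AreaCode - (?<AreaCode>\S+) AreaNum - (?<AreaNum>\S+) Sector - (?<Sector>\S+) Short Sem - (?<Short_Sem>\S+)"
  | rex field=message "userid - (?<UserId>\S+)"
  | eval is_extra_event=if(like(message, "%Failed to login%") OR like(message, "%stackoverflow group%"), 1, 0)
  | stats max(is_extra_event) as has_extra_event values(UserPin) as UserPin values(AreaCode) as AreaCode values(AreaNum) as AreaNum values(Sector) as Sector values(Short_Sem) as Short_Sem by UserId
  | where has_extra_event > 0
  | fields - has_extra_event
  | table UserId UserPin AreaCode AreaNum Sector Short_Sem

As written this keeps a userid that has at least one of the two extra events; if both are required, replace the single is_extra_event flag with one flag per message type and require both in the where clause.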
I am setting up TCP with TLS. Currently I have a syslog server sending data to my Splunk instance, but the message is being rejected:

02-09-2022 11:15:13.039 -0800 ERROR TcpInputProc [3972 FwdDataReceiverThread] - Message rejected. Received unexpected message of size=1009989694 bytes from src=myserver.com:1571 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.

Below is my inputs.conf:

[script://$SPLUNK_HOME\bin\scripts\splunk-wmi.path]
disabled = 0

[splunktcp-ssl:514]
sourcetype = syslog

[SSL]
serverCert = C:\Program Files\Splunk\etc\auth\server.pem
sslVersions = tls1.2
cipherSuite = ECDHE-RSA-AES256-GCM-SHA384
sslPassword = PASSWORD

Any help is appreciated. Thank you, Marco
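One thing worth checking (a hedged suggestion rather than a verified fix): splunktcp-ssl ports only accept the Splunk-to-Splunk forwarding protocol, and the "Possible invalid source sending data to splunktcp port" text in the error is the usual symptom of a plain syslog sender hitting such a port. If the sender is a syslog server rather than a Splunk forwarder, a tcp-ssl stanza would be the matching input, for example:

  [tcp-ssl:514]
  sourcetype = syslog

  [SSL]
  serverCert = C:\Program Files\Splunk\etc\auth\server.pem
  sslVersions = tls1.2
  cipherSuite = ECDHE-RSA-AES256-GCM-SHA384
  sslPassword = PASSWORD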
Hello, I am very new to Splunk but am trying to figure a few things out. I have been tasked with building a search so that I can monitor who accesses a directory on a Linux box, and I am unsure how to start. I don't yet have the path that I need to monitor, but I have been asked to put together something basic so that I can just drop in the directory path once it is set up. I realize this is not a lot of information, but I would appreciate any help! Thank you in advance!
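One common pattern (a sketch with several assumptions: auditd is running on the Linux box with a watch rule on the directory, e.g. auditctl -w /path/to/dir -p rwxa -k dirwatch, a forwarder is installed on the box, and the index, sourcetype and key names below are placeholders) is to ingest the audit log and then search on the audit key:

  # inputs.conf on the Linux forwarder
  [monitor:///var/log/audit/audit.log]
  index = os_linux
  sourcetype = linux_audit
  disabled = 0

A basic search over those events might then look like this, with the field extraction depending on how the audit sourcetype is parsed in your environment:

  index=os_linux sourcetype=linux_audit dirwatch
  | rex "\bauid=(?<audit_uid>\d+)"
  | stats count as accesses latest(_time) as last_access by audit_uid
  | convert ctime(last_access)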
We're running Splunk 8.1.7.2. I am an admin. I have created a lookup file (my_lookup.csv) and a lookup definition (my_lookup) referencing that file, in an app (my_app). Both the lookup file and the definition have permissions set to "All Apps (system)" and "Everyone Read"; write is for admin only. When I run the following searches, I see the contents of the lookup file as expected:

| inputlookup my_lookup.csv
OR
| inputlookup my_lookup

However, when my users attempt to run the searches above, they get the following errors:
- "The lookup table 'my_lookup.csv' requires a .csv or KV store lookup definition."
- "The lookup table 'my_lookup' is invalid."

I don't understand how this could be. It's also worth pointing out that the users used to be able to get results. What permission or capability isn't set properly? Any help is greatly appreciated. Thanks.
Hi, I am not able to locate the Akamai SIEM API option in the data inputs for the Akamai app. Can someone please help? I am not able to ingest Akamai logs, possibly because of this missing API input.
Good afternoon, I have a Cortex XDR input configured in my Palo Alto Networks add-on. I want to deploy some use cases my company has developed, and I need the "vendor_action" and "file_hash" fields. Looking at the events, neither of these fields is present in the JSON data indexed by the add-on. Does anyone know why this happens? I'm using version 7.0.2. Thanks a lot for the help in advance!
Hi folks, I have an issue where I wish to ignore ALL 403 and 401 HTTP errors for my application. Originally we had a 401 rule in the error detection section (Error Detection Using HTTP Return Codes) that was enabled. I was able to add a 403 rule and keep it disabled, and the following day my 403 errors were ignored, so that worked. Trawling through more errors, I found that 401 errors are also not actual errors for us, so I wanted to ignore both 401 and 403. I went back to the same error detection item and disabled the 401 rule as well, so I then had both rules present and disabled. However, that didn't seem to work, and both kinds of errors came back. I also tried adding a 404 rule and keeping it enabled (in case at least one rule needed to be enabled there), which didn't help, as well as combining the 401 and 403 into a single item using a range. Right now I am back to having 401, 403 and 404 as separate rules, with 401 and 403 disabled and 404 enabled (I do want to capture 404s). I am quite new to this level of AppDynamics, so I wondered if someone could spot what I am doing wrong. Thanks.
Can someone walk me through the steps of ingesting data into Splunk Cloud? I have read the documentation, but it gets confusing.