All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Warning: long, detailed explanation ahead. The summary version is that I have nested JSON arrays and fields that I am having trouble extracting properly into individual fields. The chosen fields will change over time, based on external factors, so I need to be able to extract and report on all of them, with the ability to identify the array index (i.e. {0}, {1}, etc.). No solution that I have looked at or come up with is working for me, so I am turning to you smarter folks for help.

Detail: The end result is that I want to be able to alert or report on various fields that are deemed interesting. These are "request" and "response" arrays in each transaction (think checking items in a shopping cart for various flags and indicators). The chosen fields will change over time, based on external factors, so I need to be able to extract them from the array and report on all of them at some point.

Here is a sample request and response. As you can see, the request array is market_basket.request{} and the response is market_basket.response{}. Focusing on the response portion, the first response has an "02" field and a "dataset". The next response{1} has fields 02, 03, 04, 05, 08, etc., same with response{2} and response{3}. If I do a simple rename

| rename market_basket.response.* as Resp_*

the fields don't line up. The contents of "Resp_19" should be down one line, as there was no field 19 in market_basket.response{0}. See here:

If I change the query to this

| spath path=market_basket.response{} output=Response
| spath input=Response
| table tran_id 0* 1* 2* dataset

then I only get the first row; the other 3 rows don't show up. The only way that I have been able to get it to work is to address each index and field individually:

| spath path=market_basket.response{0} output=Resp_0
| spath path=market_basket.response{0}.dataset output=Resp_0_dataset
| spath path=market_basket.response{0}.02 output=Resp_0_02
| spath path=market_basket.response{1} output=Resp_1
| spath path=market_basket.response{1}.dataset output=Resp_1_dataset
| spath path=market_basket.response{1}.01 output=Resp_1_01
| spath path=market_basket.response{1}.02 output=Resp_1_02
| spath path=market_basket.response{1}.03 output=Resp_1_03
| spath path=market_basket.response{1}.04 output=Resp_1_04
| spath path=market_basket.response{1}.05 output=Resp_1_05
| spath path=market_basket.response{1}.06 output=Resp_1_06
| spath path=market_basket.response{1}.07 output=Resp_1_07
| spath path=market_basket.response{1}.08 output=Resp_1_08
| spath path=market_basket.response{1}.09 output=Resp_1_09
| spath path=market_basket.response{1}.10 output=Resp_1_10
| spath path=market_basket.response{1}.11 output=Resp_1_11
| spath path=market_basket.response{1}.12 output=Resp_1_12
| spath path=market_basket.response{1}.13 output=Resp_1_13
| spath path=market_basket.response{1}.14 output=Resp_1_14
| spath path=market_basket.response{1}.15 output=Resp_1_15
| spath path=market_basket.response{1}.16 output=Resp_1_16
| spath path=market_basket.response{1}.17 output=Resp_1_17
| spath path=market_basket.response{1}.18 output=Resp_1_18
| spath path=market_basket.response{1}.19 output=Resp_1_19
| spath path=market_basket.response{1}.20 output=Resp_1_20
| spath path=market_basket.response{1}.21 output=Resp_1_21
...

But with up to 60 responses and 20 fields per transaction, that many spaths would be a non-starter, especially considering that I need to factor in the request portions too at some point.

Finally, to give an example use case, I want to check field 19 on the response, and if the flag starts with "NN" or "NY", put out an alert: "Item market_basket{whatever #}.02 has not been cleared for sale. Flags are: market_basket{whatever #}.19".

I know that was a lot of detail, but I wanted to make sure I put down the different ways that I tried. Any help would be much appreciated!
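A possible direction (a sketch, untested against this data; it assumes tran_id uniquely identifies a transaction): mvexpand splits the multivalue array into one event per element before the second spath, so each response's fields stay on their own row, and streamstats recovers the array index. This also matches why the earlier attempt returned only one row — spath input= parses a single value, so without mvexpand only the first array element is extracted.

| spath path=market_basket.response{} output=Response
| mvexpand Response
| streamstats count as resp_index by tran_id
| eval resp_index = resp_index - 1
| spath input=Response
| table tran_id resp_index dataset 0* 1* 2*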
This event is printed every time. UserPin, AreaCode, AreaNum, Sector, and Short Sem are unique for each userid and appear only inside the "User Login successfully" message, with a timestamp:

"message":" *** User Login successfully credentials userid 2NANO-323254-7654-4 UserPin - 287654 AreaCode - 98765 AreaNum - 98765 Sector - 87612345 Short Sem - ZEB"

The two events below are only printed when certain conditions are met. I am very new to Splunk. How can we write a Splunk query that takes the userid with its UserPin, AreaCode, AreaNum, Sector, and Short Sem only when the events below were also printed for that userid, and builds a table from them? If the two messages below are not printed with a userid from the message above, then we should not consider that userid.

"message": "User Failed to login userid - 2NANO-323254-7654-4"
"message": "User is from stackoverflow group, on XZ ABCE for userid - 2NAN0-323254-7654-4"

This is the table structure I want to fill:

UserId | UserPin | AreaCode | AreaNum | Sector | Short_Sem

I am very new to Splunk; can someone guide me on how to start building this and where to look? Any hint or demo will work. Thank you.

Example:

"message":" *** User Login successfully credentials userid 2NANO-323254-7654-4 UserPin - 287654 AreaCode - 98765 AreaNum - 98765 Sector - 87612345 Short Sem - ZEB"
"message": "User Failed to login userid - 2NANO-323254-7654-4"
"message": "User is from stackoverflow group, on XZ ABCE for userid - 2NAN0-323254-7654-4"
"message":" *** User Login successfully credentials userid 2ABDO-54312-7654-4 UserPin - 287654 AreaCode - 98765 AreaNum - 98765 Sector - 87612345 Short Sem - ZEB"
"message":" *** User Login successfully credentials userid 2COMA-765234-8653-4 UserPin - 287654 AreaCode - 98765 AreaNum - 98765 Sector - 87612345 Short Sem - ZEB"

So we consider only the first userid, because that userid has two more events with the same userid, and all the associated events have timestamps:

UserId | UserPin | AreaCode | AreaNum | Sector | Short_Sem
2NANO-323254-7654-4 | 287654 | 98765 | 98765 | 87612345 | ZEB
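A starting point (a sketch; it assumes the message field is extracted from the JSON, the rex patterns are guesses from the sample messages, and the index is a placeholder):

index=your_index ("User Login successfully" OR "User Failed to login" OR "User is from")
| rex field=message "userid\s*-?\s*(?<UserId>\w[\w-]*)"
| rex field=message "UserPin - (?<UserPin>\S+) AreaCode - (?<AreaCode>\S+) AreaNum - (?<AreaNum>\S+) Sector - (?<Sector>\S+) Short Sem - (?<Short_Sem>\S+)"
| eval is_followup=if(match(message, "Failed to login|stackoverflow group"), 1, 0)
| stats values(UserPin) as UserPin values(AreaCode) as AreaCode values(AreaNum) as AreaNum values(Sector) as Sector values(Short_Sem) as Short_Sem sum(is_followup) as followups by UserId
| where followups > 0
| table UserId UserPin AreaCode AreaNum Sector Short_Sem

Change followups > 0 to followups >= 2 if both follow-up messages must be present before a userid counts.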
I am setting up TCP with TLS. Currently I have a syslog server sending data to my Splunk instance, but the message is being rejected:

02-09-2022 11:15:13.039 -0800 ERROR TcpInputProc [3972 FwdDataReceiverThread] - Message rejected. Received unexpected message of size=1009989694 bytes from src=myserver.com:1571 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.

Below is my inputs.conf:

[script://$SPLUNK_HOME\bin\scripts\splunk-wmi.path]
disabled = 0

[splunktcp-ssl:514]
sourcetype = syslog

[SSL]
serverCert = C:\Program Files\Splunk\etc\auth\server.pem
sslVersions = tls1.2
cipherSuite = ECDHE-RSA-AES256-GCM-SHA384
sslPassword = PASSWORD

Any help is appreciated. Thank you, Marco
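Worth checking (an assumption based on the error text): a splunktcp port expects Splunk's proprietary forwarder-to-indexer protocol, so a plain syslog sender hitting it typically produces exactly this "unexpected message size" rejection. A sketch of the same listener as a raw TCP-with-TLS input instead:

[tcp-ssl:514]
sourcetype = syslog

[SSL]
serverCert = C:\Program Files\Splunk\etc\auth\server.pem
sslVersions = tls1.2
cipherSuite = ECDHE-RSA-AES256-GCM-SHA384
sslPassword = PASSWORD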
Hello, I am very new to Splunk but trying to figure a few things out. I have been tasked with building a search to monitor who accesses a directory on a Linux box, and I am unsure how to start. I don't yet have the path that I need to monitor, but I have been asked to build something basic so that I can just plug in the directory path when they set it up.

I realize this is not a lot of information, but I would appreciate any help! Thank you in advance!
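One common pattern, sketched under the assumption that auditd is available on the box and you can ingest its log: watch the directory with an audit rule, monitor the audit log with a forwarder, and search by the rule's key. The path, key, and sourcetype below are placeholders.

# On the Linux host (placeholder path and key):
#   auditctl -w /path/to/directory -p rwxa -k dir_watch
# inputs.conf on the forwarder:
[monitor:///var/log/audit/audit.log]
sourcetype = linux_audit
disabled = 0

A search such as sourcetype=linux_audit "dir_watch" would then surface the access records, which include the user and executable involved.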
We're running Splunk 8.1.7.2. I am an admin. I have created a lookup file (my_lookup.csv) and a lookup definition (my_lookup) referencing that file, in an app (my_app). Both the lookup file and definition have permissions set to "All Apps (system)" and "Everyone Read"; write is for admin only. When I run the following searches I see the contents of the lookup file as expected:

| inputlookup my_lookup.csv

OR

| inputlookup my_lookup

However, when my users attempt to run the searches above, they get the following errors:

- "The lookup table 'my_lookup.csv' requires a .csv or KV store lookup definition."
- "The lookup table 'my_lookup' is invalid."

I don't understand how this could be. It's also worth pointing out that the user used to be able to get results. What permission or capability isn't set properly? Any help is greatly appreciated. Thanks.
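One way to see what is actually visible and how it is shared (a sketch using the lookup-table-files REST endpoint; compare the output when run as admin versus as an affected user):

| rest /servicesNS/-/-/data/lookup-table-files splunk_server=local
| table title eai:acl.app eai:acl.sharing eai:acl.perms.read

If the file or definition doesn't appear for the user, re-check that the user's role can read the app itself, not just the lookup objects.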
Hi, I am not able to locate the Akamai SIEM API option in the Data inputs for the Akamai app. Can someone please help? I am not able to ingest Akamai logs, possibly because of this missing API input.
Good afternoon, I have a Cortex XDR input configured in my Palo Alto Networks add-on. I want to deploy some use cases my company has developed, and I need "vendor_action" and "file_hash". Looking inside the events, neither of these fields is present in the JSON data indexed from the add-on. Does anyone know why this happens? I'm using version 7.0.2. Thanks a lot for the help in advance!
Hi folks, I have an issue where I wish to ignore ALL 403 and 401 HTTP errors for my application.

Originally we had 401 set in the error detection section (Error Detection Using HTTP Return Codes) and enabled. I was able to add a 403 entry and keep it disabled, and the following day my 403 errors were ignored; that worked.

Trawling through more errors, I found that 401 errors are also not actual errors for us, so I wanted to ignore both 401 and 403. I went back to the same error detection item and disabled 401 as well, so I had both set and disabled. However, that didn't seem to work, and both errors came back.

I also tried adding a 404 entry to keep enabled (in case one entry needed to be enabled there); this didn't help, nor did combining the 401 and 403 into a single item using the range.

Right now I am back to having 401, 403 and 404 as separate entries, with 401 and 403 disabled and 404 enabled (I do want to capture 404s).

I am quite new to this level of AppDynamics and wondered if someone could spot what I am doing wrong. Thanks
Can someone walk me through the steps of ingesting data into Splunk Cloud? I have read the documentation, but it gets confusing.
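At a very high level, data gets in via a forwarder, an add-on, or the HTTP Event Collector (HEC). A minimal smoke test, sketched on the assumption that you first create an HEC token in Splunk Cloud (the stack name, token, and index are placeholders):

curl "https://http-inputs-mystack.splunkcloud.com:443/services/collector/event" \
  -H "Authorization: Splunk YOUR_HEC_TOKEN" \
  -d '{"event": "hello splunk cloud", "sourcetype": "manual", "index": "main"}'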
I recently updated Splunk_TA_Windows and am seeing this error on my search head cluster:

[Indexers] Could not load lookup=LOOKUP-user_account_control_property

This is an automatic lookup generated in the default directory of the app. I'm not familiar with it, and I'm not seeing this error on my deployer (standalone) instance. The configs appear the same. Any help is greatly appreciated. Thanks.
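One way to narrow it down (a sketch; run it on a cluster member showing the error and on the deployer, then diff): btool shows which layered config defines the automatic lookup and whether the transforms stanza it references survived the update.

$SPLUNK_HOME/bin/splunk btool props list --debug | grep -i user_account_control
$SPLUNK_HOME/bin/splunk btool transforms list --debug | grep -i user_account_control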
Hi, I'm trying to pull in Windows event logs from the Windows PowerShell path. This path includes 800s, which I've seen in Event Viewer, so I know they're generated and stored there. I just can't seem to pull anything, and I don't see much help on the internet for pulling this path. This is my inputs.conf:

[WinEventLog://Windows PowerShell/]
disabled=0

Note: This is different from the other PowerShell path where I get my 4103 and 4104 codes:

[WinEventLog://Microsoft-Windows-PowerShell/Operational]
disabled=0

Any help is appreciated. Thanks.
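One thing worth trying (an assumption: the trailing slash may keep the stanza from matching the channel, whose name is literally "Windows PowerShell" with no path suffix):

[WinEventLog://Windows PowerShell]
disabled = 0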
Hello, I need your help please. I have two tables resulting from two searches, and I need to join them to make a cumulative bar chart by date. My tables are:

What I want to achieve is:

Datum      | A1  | A2 | A3  | A4  | A5   | A6
2022-02-08 | 5.7 |    | 3.7 | 1.9 | 4.56 | 90.3
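A common pattern for this (a sketch; the two inner searches and field names are placeholders for your actual ones): run one search, append the second, and merge rows on the shared Datum column. The merged result can then feed a bar chart directly.

index=placeholder_a ...
| stats latest(valueA) as A1 by Datum
| append
    [ search index=placeholder_b ...
      | stats latest(valueB) as A2 by Datum ]
| stats values(*) as * by Datum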
In the query, _time is already formatted, but when I try to export the data to CSV it shows different formats.

Query:

index="win event" host IN (USMDCKPAP30074) Event=6006 OR Event="6005" Type=Information
| eval Uptime = if(Event=6005, strftime(_time, "%Y-%d-%m %H:%M:%S"), null())
| table host Uptime

E.g.:

2022-31-01 10:00:42
2022-29-01 06:40:11
2022-27-01 12:55:56

After exporting:

8/1/2022 4:08
1/1/2022 4:03
2021-25-12 04:03:29
2021-18-12 04:02:54
2021-16-12 10:14:45
2021-16-12 10:08:21
11/12/2021 4:08
4/12/2021 4:11
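Two things worth checking (observations, not a certain diagnosis): "%Y-%d-%m" puts the day before the month, and spreadsheet tools such as Excel silently re-parse anything that looks like a date when opening a CSV, which would explain why only some rows change. Keeping ISO order reduces the ambiguity; opening the CSV in a plain text editor should confirm whether Splunk exported a consistent format in the first place.

| eval Uptime = if(Event=6005, strftime(_time, "%Y-%m-%d %H:%M:%S"), null())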
Hi All, we have a number of microservices with a correlation id flowing across the requests and responses. What I'm trying to do is create a flow of request and response for one correlation id. Example log:

correlation id | time     | source    | message
123            | 12:00:00 | Service A | Enter service A
123            | 12:00:01 | Service A | Calling Service B
123            | 12:00:02 | Service B | Routing to Service C
123            | 12:00:03 | Service C | Result Found. Response User 1
123            | 12:00:04 | Service B | Using User 1 to find resource
123            | 12:00:05 | Service B | Resource Found. Calling Service D
123            | 12:00:06 | Service D | Sub-resource not found. Response: null
123            | 12:00:07 | Service B | Return result. Response User1, resource1
123            | 12:00:08 | Service A | Return User1, resource1

From the example log, I would like to be able to group:

Service A (12:00:00 - 12:00:01)
Service B (12:00:02)
Service C (12:00:03)
Service B (12:00:04 - 12:00:05)
Service D (12:00:06)
Service B (12:00:07)
Service A (12:00:08)

What I'm trying to do right now is get a simple event result first, before going to any fancier visualization. I tried using transaction, but I can't separate the source when there's a different call in between. Here's the query that I've tried:

123
| eval _time=strptime(timegenerated,"%Y-%m-%dT%H:%M:%SZ")
| sort - _time
| transaction source

Any help is greatly appreciated. Thanks, Allen
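One way to build those groups without transaction (a sketch; it assumes extracted field names correlation_id and source, per the example log): streamstats starts a new group whenever the source changes within a correlation id.

123
| eval _time=strptime(timegenerated, "%Y-%m-%dT%H:%M:%SZ")
| sort 0 _time
| streamstats current=f last(source) as prev_source by correlation_id
| eval new_group=if(isnull(prev_source) OR source!=prev_source, 1, 0)
| streamstats sum(new_group) as group_id by correlation_id
| stats min(_time) as start max(_time) as end list(message) as messages by correlation_id group_id source
| sort correlation_id group_id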
Follow-on from my previous question. I ended up using a slightly different solution, involving match for the case criteria. Since the query inputs are provided by token values in a Splunk Studio dashboard, I would not be able to properly break up and quote each term of a multi-value text input. By using match, I can just tell users to use | as a separator instead and run a search like:

| eval state=case(match(foo, "^($foo_token$)$") AND match(bar, "^($bar_token$)$"), 1, NOT match(foo, "^($foo_token$)$") AND NOT match(bar, "^($bar_token$)$"), 2, 1=1, 0)

However, the table cannot run this search. Even if both foo and bar have input values, the table shows "Waiting for input." If I escape the end-of-line match character, as $$ or as \$, I see the same "Waiting for input." If I use only:

| eval state=case(match(foo, "^($foo_token$)$"), 1, 1=1, 0)

the search runs and produces the expected results, so it seems to be a problem with having 2 or more $s. I want to match whole-line values of fields only. How can I do this?
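A possible workaround (a sketch, on the assumptions that the bare $ regex anchors are what collide with the token parser, and that values contain no | characters or regex metacharacters): drop the ^...$ anchors and wrap both the subject and the token list in a delimiter, so only whole values can match.

| eval state=case(match("|" . foo . "|", "\|($foo_token$)\|") AND match("|" . bar . "|", "\|($bar_token$)\|"), 1, NOT match("|" . foo . "|", "\|($foo_token$)\|") AND NOT match("|" . bar . "|", "\|($bar_token$)\|"), 2, 1=1, 0)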
Hi Splunk Community, I need some help with the following query:

(index=* OR index=*) (sourcetype=A OR sourcetype=C OR sourcetype=D) (a_location=* OR b_location=* OR c_location=* OR d_location=*) (a_location!=*S1* OR b_location!=*S1* OR c_location!=*S1* OR d_location!=*S1*) User!=basketball UserGroup!=baseball
| eval Interface_card=mvappend(a_location,b_location,c_location,d_location)
| mvexpand Interface_card
| bin span=1d _time
| stats sum(TCDuration) as TCDuration by _time Interface_card
| eval TCDuration=TCDuration/1000
| eval Utilization=round(((TCDuration/86400)*100),1)
| eval Utilization=if(Utilization > 100, 100, Utilization)
| fields - TCDuration
| timechart eval(round(avg(Utilization),1)) by Interface_card limit=0

1. How can I optimize it?
2. How can I filter only Utilization between 0-40 and/or 70-99, or any other band I want to filter on?

Appreciate any help. Thank you
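For the filtering part, a where clause after the Utilization evals and before the timechart would do it (a sketch; adjust the bands as needed). For optimization, naming real indexes instead of index=* is usually the single biggest win, since index=* scans everything.

| where (Utilization >= 0 AND Utilization <= 40) OR (Utilization >= 70 AND Utilization <= 99)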
When on certain pages, my search head says "Loading" and this never goes away. This also causes weird behaviour and JavaScript errors when viewing certain Splunk administrative pages, such as app installation or LDAP settings. Looking into the code, I find the following:

<script type="text/javascript">
//<![CDATA[
this.messenger = Splunk.Messenger.System.getInstance();
// a misconfigured hierarchy can often derail the module loading, so the 'Loading' string can get stuck there.
$("#loading").hide();
//]]>
</script>

It says: "a misconfigured hierarchy can often derail the module loading, so the 'Loading' string can get stuck there." What does this mean? How can I see what the misconfiguration in the hierarchy is? What can I look at to see failed module loading? I'm sure there is a Splunk app causing this but, short of trial and error, I am unsure what it could be. Any help would be very useful! Thanks!
Hi there, I have a simple dashboard that allows me to see growth in the number of Live / Archived accounts we manage in Google. We currently have a daily pull of the directory service into Splunk, which allows the following query to be run (I have a few like this, with Archived / Live being the adjustments I make):

index="google" sourcetype="*directory*" "emails{}.address"="*@mydomain.com"
| timechart count by archived span=1d cont=FALSE

In the last week or so we have had some issues where we sometimes get two or three directory pulls into Splunk, which results in the graph displaying double / triple the count (see attached image).

My question: are there any additional clauses I can add to my query to interpret ONLY one data pull per 24-hour period? This would allow consistent reporting in the face of inconsistent directory pulls into Splunk. I have poked around a bit with timechart but feel perhaps I should be using a stats command instead...? Any direction on which approach to use is appreciated!
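One way to make the count immune to duplicate pulls (a sketch; it assumes every pull re-lists every account, so keeping one record per account per day is safe):

index="google" sourcetype="*directory*" "emails{}.address"="*@mydomain.com"
| bin _time span=1d
| rename "emails{}.address" as email
| dedup _time email
| timechart span=1d cont=FALSE count by archived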
Hi, all! I have a table that I want to look like this. How can I do this? Can anyone help me?
I need to get the list of .conf files. When I run the following query:

| rest /services/configs/conf-props

it returns the conf stanza objects, but I need the .conf files themselves, not the objects. Any help would be appreciated! Thanks!
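The properties endpoint lists the configuration files themselves rather than stanza objects (a sketch; add splunk_server=* to cover other instances):

| rest /services/properties splunk_server=local
| table title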