All Topics



My search ends with: stats values(status) by ID. It currently returns the values grouped in a single multivalue cell per ID:

ID   Status
1    Agreed
     N/A
     Negoiate
2    Agreed
     Submitted

I want the values split into separate rows, as given below:

ID   Status
1    Agreed
1    N/A
1    Negoiate
2    Agreed
2    Submitted

For reference, I attached the screenshot below. Can you please suggest how to do this?
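Splitting a multivalue stats result into one row per value is what SPL's mvexpand command does. The reshaping itself can be sketched in Python (the IDs and status values below are just the sample data from the question):

```python
# Expand a multivalue "status" field into one row per (ID, status) pair --
# the same reshaping `mvexpand` performs after `stats values(status) by ID`.
grouped = {1: ["Agreed", "N/A", "Negoiate"], 2: ["Agreed", "Submitted"]}

expanded = [(id_, status)
            for id_, statuses in grouped.items()
            for status in statuses]
```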
I am using AppDynamics for the first time (trial account). I created a .NET app in the AppDynamics portal and followed the steps shown while creating the application (I also checked the video).

1. I installed the .NET agent on my laptop.
2. I created sample applications (.NET Core), hosted them in IIS, and generated some load for testing.
3. The agent connection was successful.

Issue: I still cannot see any data/statistics on the application dashboard. Did I miss anything?
Hi Splunkers, I'm relatively new to Splunk and trying to understand a few aspects. I need clarification on the following:

- In which cases, or for which application monitoring, does a UF need a domain account?
- Does monitoring Domain Controller server events need a domain account?
- Is the general approach to keep a local account for UFs on servers unless a domain account is explicitly needed?
- What are roles used for when installing UFs?
- What are the UF's username and password used for? Are they a security measure, so that no one can make changes from the CLI? Am I right?
Hi, I have the search below to compare the previous 2 days of Splunk usage, but I need an additional column that computes the difference between the 2 dates. How can we achieve this?

index=_internal source=*license_usage.log TERM(type=Usage) earliest=-2d@d latest=@d
| eval b=round(b/1024/1024/1024,2)
| bin span=1d _time
| eval date=strftime(_time,"%d_%b")
| chart sum(b) AS GB over host by date
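The chart above produces one column per date, so the wanted column is just the difference of those two columns. A sketch of the arithmetic outside Splunk (the host names and GB figures below are invented):

```python
# Per-host usage for two consecutive days (GB), one column per date, as
# the chart produces; the extra column is the difference of the two dates.
usage = {"hostA": {"01_Oct": 10.5, "02_Oct": 12.0},
         "hostB": {"01_Oct": 3.2,  "02_Oct": 2.7}}

diff = {host: round(days["02_Oct"] - days["01_Oct"], 2)
        for host, days in usage.items()}
```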
Hi, I have an index that returns alarms with details as a string. I want to define the text in bold as a field. The string can vary according to the alarm event.

07/10/2020 23:59:06 [37104$0]: sndala.cxx.68: disk space is less than threshold ,TYPE=SINGLE, LEVEL=major...

Thanks
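One way to define the field is a regex that captures the free text between the source marker and the ",TYPE=" suffix. A sketch in Python — the pattern is an assumption based on the single sample line, so check it against more alarm events:

```python
import re

event = ("07/10/2020 23:59:06 [37104$0]: sndala.cxx.68: "
         "disk space is less than threshold ,TYPE=SINGLE, LEVEL=major")

# Capture everything between "<source>: " and the ",TYPE=" suffix.
m = re.search(r"\]:\s+\S+:\s+(?P<msg>.+?)\s*,TYPE=", event)
message = m.group("msg") if m else None
```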
I'm trying to get results that show randomized filenames, but my search is scoring randomization across the whole directory path. I just want to score the filenames. Here's what I have so far:

eventtype=winevent EventCode=4688 (New_Process_Name="C:\\Windows\\TEMP\\*" OR New_Process_Name="C:\\users\\*\\appdata\\local\\temp\\*") NOT New_Process_Name="C:\\Users\\asiaynrf\\AppData\\Local\\Temp\\QIKCache\\*"
| lookup ut_shannon_lookup word as New_Process_Name
| where ut_shannon > 4.5
| stats values(ut_shannon) as "Shannon Entropy Score" by New_Process_Name,host
| rename New_Process_Name as Process,host as Endpoint
| sort -"Shannon Entropy Score"

How do I limit the entropy scoring to the filename and not the whole path?
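The fix is to extract just the basename before scoring it, rather than feeding the full path to the lookup. The entropy-on-filename logic can be sketched in Python (the path is invented, and the helper mimics the kind of score the ut_shannon lookup produces):

```python
import math
import ntpath  # parses Windows-style backslash paths on any platform

def shannon_entropy(s: str) -> float:
    """Shannon entropy of a string, in bits per character."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

path = r"C:\Users\bob\AppData\Local\Temp\xk9q2zr81v.exe"
filename = ntpath.basename(path)   # score only the filename, not the path
score = shannon_entropy(filename)
```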
There have been numerous other questions that I've read through to see if a similar situation has been asked, but so far (from what I've gathered) they haven't matched my situation, so I figured I'd ask here. My goal is to create an alert.

index=abc123 Operation=EventTriggered
| spath input=Data
| fields reid

This gives me reid (relative ID). In the same index (abc123) there are events that have a unique ID (the field name is GUID). So, using the above search's reid value, I want to take that value, search for it in GUID, and return events. If reid=xyz, I want something along the lines of:

index=abc123 GUID=xyz NOT (Operation=EventTriggered)
| rename "Parameters{}.Name" AS paramsName "Parameters{}.Value" AS paramsValue
| eval params=mvzip(paramsName,paramsValue)
| table myfields

The issue is that I don't know which value of GUID to search for until I run the first search, and the field values that I care about and want to table are generated by my second search. My question is: what is a good way to approach this? I don't think I can use a join, since reid and GUID are different field names. In the result set of the first search I can't simply rename reid to GUID, because the event with the recorded reid has its own GUID value, although I could probably do some multivalue field manipulation to overcome that. Could I use a subsearch somehow — maybe get the value of reid in the subsearch and pass it in that way? Thanks.
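A subsearch that collects the reid values and feeds them to the outer search is the usual SPL answer. The two-pass logic itself can be sketched in Python (the events below are invented):

```python
# Pass 1: collect reid values from the trigger events.
# Pass 2: keep non-trigger events whose GUID is one of those reids.
events = [
    {"Operation": "EventTriggered", "reid": "xyz"},
    {"Operation": "Other", "GUID": "xyz", "detail": "wanted"},
    {"Operation": "Other", "GUID": "abc", "detail": "unwanted"},
]

reids = {e["reid"] for e in events if e.get("Operation") == "EventTriggered"}
matches = [e for e in events
           if e.get("Operation") != "EventTriggered" and e.get("GUID") in reids]
```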
Hi, I am trying to create a trending single value but am having trouble setting it up. Essentially, the stats below sums up VALUE_NUM and works as expected; however, I would like to compare this to the 7-day period, or to the previous value of the time picker.

index=main VALUE_NUM>0
| dedup UUID
| stats sum(VALUE_NUM)

I have tried:

index=main VALUE_NUM>0
| dedup UUID
| timechart count as sum(VALUE_NUM) span=7d

However, this isn't returning the correct value.

TIA
I would like to apply a formula to each of the values in the field "stocks". I have been able to show this in a chart, but I need it as a table... what is going on here?

The values in day_hour and stocks are strings. flow is a numeric value. pct should be a numeric value.

| chart sum(eval(flow*100)) AS pct BY day_hour stocks

The chart command produces the following. This is how I want my table to look:

day_hour           stock_name_A   stock_name_B   stock_name_C
2020-01-01 00:00
2020-01-01 01:00
2020-01-01 02:00

Instead, my table looks like this:

day_hour           stocks         pct
2020-01-01 00:00   stock_name_A
2020-01-01 00:00   stock_name_B
2020-01-01 00:00   stock_name_C
2020-01-01 01:00   stock_name_A
2020-01-01 01:00   stock_name_B
2020-01-01 01:00   stock_name_C
2020-01-01 02:00   stock_name_A
2020-01-01 02:00   stock_name_B
2020-01-01 02:00   stock_name_C
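The difference is that chart pivots the second BY field into columns, while a plain table keeps one row per combination. The pivot that chart performs can be sketched in Python (values invented):

```python
# Pivot long-format rows (day_hour, stock, pct) into one row per day_hour
# with one column per stock -- what `chart ... BY day_hour stocks` renders.
rows = [("2020-01-01 00:00", "stock_name_A", 1.0),
        ("2020-01-01 00:00", "stock_name_B", 2.0),
        ("2020-01-01 01:00", "stock_name_A", 3.0)]

pivot = {}
for day_hour, stock, pct in rows:
    pivot.setdefault(day_hour, {})[stock] = pct
```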
Hi, I have a relatively simple search grouping events by an extracted correlation ID, like this:

| eval id=coalesce(cid, cid2)
| stats values(*) by id

However, what I need to do now is further filter the events included in each final row. Specifically, I have an extracted path1 field on some events and a path2 field on others sharing the same id. If path1 includes path2, I don't want either event in the aggregated rows. I tried (prior to stats) something like:

| eventstats values(path1) as AllPath1 by id
| where NOT like(AllPath1, "%".path2."%")

...but for some reason path2 disappears. Any pointers?
Hello experts, I have the output below from a Splunk search, and I only want to display the Year-Month row 3 months ahead of the current Year-Month.

YearMonth   Upper95(Prediction)
Sep 2020     5
Oct 2020    11
Nov 2020    15
Dec 2020    18
Jan 2021    21
Feb 2021    23
Mar 2021    26

From the above output I only want to display the row for Jan 2021. If the current Year-Month were November 2020, I would want to display the row for Feb 2021. Any help appreciated.

Thanks
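The target row is simply the current month plus three, with year rollover. A sketch of the date arithmetic in Python (the helper name is made up):

```python
from datetime import date

def month_plus(d: date, months: int) -> str:
    """Return the month `months` ahead of d, formatted like 'Feb 2021'."""
    total = d.year * 12 + (d.month - 1) + months
    year, month0 = divmod(total, 12)
    return date(year, month0 + 1, 1).strftime("%b %Y")
```

The returned string can then be compared against the YearMonth column to pick the one row to keep.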
What is the strptime format for 2020-09-09T13:04:15.7007091Z?
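In Splunk, a TIME_FORMAT along the lines of %Y-%m-%dT%H:%M:%S.%7NZ should match — the %7N subsecond specifier is the part to verify against your version's date/time format documentation. Python's strptime is stricter (%f accepts at most six fractional digits), so a sketch there has to trim the seventh digit first:

```python
from datetime import datetime

raw = "2020-09-09T13:04:15.7007091Z"

# Python's %f takes at most 6 fractional digits; drop the 7th before parsing.
trimmed = raw[:26] + "Z"
parsed = datetime.strptime(trimmed, "%Y-%m-%dT%H:%M:%S.%fZ")
```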
I have one query, which looks like this:

Query 1:
index=test "TestRequest"
| dedup _time
| rex field=_raw "Price\":(?<price>.*?),"
| rex field=_raw REQUEST-ID=(?<REQID>.*?)\s
| rex field=_raw "Amount\":(?<amount>.*?)},"
| rex field=_raw "ItemId\":\"(?<itemId>.*?)\"}"
| eval discount=round(exact(price-amount),2), percent=(discount/price)*100, time=strftime(_time, "%m-%d-%y %H:%M:%S")
| stats list(time) as Time list(itemId) as "Item" list(REQID) as X-REQUEST-ID list(price) as "Original Price" list(amount) as "Test Price" list(discount) as "Dollar Discount" list(percent) as "Percent Override" by _time
| join X-REQUEST-ID [search index=test "UserId="
    | rex field=_raw UserId=(?<userId>.*?)#
    | dedup userId
    | rex field=_raw X-REQUEST-ID=(?<REQID>.*?)\s
    | stats list(userId) as "User ID" list(REQID) as X-REQUEST-ID by _time]

I have another two queries, which look like this:

Query 2:
search index=test "Remove Completed for"
| rex field=_raw UserId=(?<userId>.*?)#
| rex field=_raw X-REQUEST-ID=(?<REQID>.*?)\s
| stats list(userId) as "User ID" list(REQID) as X-REQUEST-ID by _time

Query 3:
search index=test "Clear Completed for"
| rex field=_raw UserId=(?<userId>.*?)#
| rex field=_raw X-REQUEST-ID=(?<REQID>.*?)\s
| stats list(userId) as "User ID" list(REQID) as X-REQUEST-ID by _time

I want to exclude the results from Query 1 that match the results from Query 2 and Query 3, based on the "User ID" column, which is present in all three queries. How can I do that?
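Whatever SPL construct carries it (e.g. a NOT [subsearch] clause), the underlying operation is a set difference on User ID; sketched in Python with invented rows:

```python
# Drop Query 1 rows whose "User ID" also appears in Query 2 or Query 3.
q1_rows = [{"User ID": "u1", "Item": "A"}, {"User ID": "u2", "Item": "B"}]
q2_ids = {"u2"}   # User IDs returned by Query 2
q3_ids = {"u9"}   # User IDs returned by Query 3

kept = [row for row in q1_rows if row["User ID"] not in (q2_ids | q3_ids)]
```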
Hi all, I have a search with multiple columns. I would like to set up an alert if field A's values are less than 10% of field B. Here are my values:

_time   field A   field B
11:00   100       120
11:15   200       130
11:30   300       450
11:45   400       450
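The alert condition is a row-level comparison: field A below 10% of field B. A sketch of the check in Python — note that, as written, none of the sample rows above actually satisfy it, so the invented first row here is altered to make something fire:

```python
# Rows where field A is less than 10% of field B should trigger the alert.
rows = [{"_time": "11:00", "A": 10,  "B": 120},
        {"_time": "11:15", "A": 200, "B": 130}]

alerts = [r for r in rows if r["A"] < 0.10 * r["B"]]
```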
I'm trying to work with the aws:description events to track changes to security groups. The events are in a nested JSON format, and there can be arbitrary numbers of to/from port combinations, as well as arbitrary numbers of subnets for each to/from combination. The JSON looks like:

"rules": [[{"from_port": "80", "ip_protocol": "tcp", "to_port": "80", "grants": [{"name": null, "group_id": null, "owner_id": null, "cidr_ip": "10.0.0.0/24"}, {"name": null, "group_id": null, "owner_id": null, "cidr_ip": "10.0.1.0/24"}], "groups": ""}]]

If I do:

| spath rules{}{}.from_port output=from_port
| eval from_port_count=mvcount(from_port)
| eval from_port_count=from_port_count-1

that puts all of the from_port values into a multivalue field, counts the values in that field, and subtracts one, so I have the range of index values in that multivalue field. I would then need (I think) some kind of foreach command that would iterate through rules{}{0}, rules{}{1}, rules{}{2}... to apply the same index-range logic to the rules{}{}.grants{}.cidr_ip field. Can the foreach command be used in the subsearch of another foreach command? I can use mvzip to stitch the data back together. Ultimately, I'm looking for an output of something like:

from_port | to_port | ip_protocol | cidr_ip
80        | 80      | tcp         | 10.0.0.0/24,10.0.1.0/24

I just don't know how to iterate through two levels of nested JSON where each level contains an arbitrary number of objects. Somebody's going to earn their SplunkTrust badge on this one...
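Outside SPL, the two nested levels are just two nested loops over the parsed JSON. A Python sketch over the sample data (wrapped in an enclosing object so it parses on its own, and with the null grant fields trimmed):

```python
import json

raw = """{"rules": [[{"from_port": "80", "ip_protocol": "tcp", "to_port": "80",
  "grants": [{"cidr_ip": "10.0.0.0/24"}, {"cidr_ip": "10.0.1.0/24"}],
  "groups": ""}]]}"""

flat = []
for rule_group in json.loads(raw)["rules"]:   # outer list: rules{}
    for rule in rule_group:                   # inner list: rules{}{}
        cidrs = ",".join(g["cidr_ip"] for g in rule["grants"])
        flat.append((rule["from_port"], rule["to_port"],
                     rule["ip_protocol"], cidrs))
```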
Hi, I have the two searches below:

index=* host=* source=*
| eval TopicName=split(topicName,".")
| chart sum(size) as Todays_Count over TopicName
| sort Todays_Count
| append [ index=* host=* source=*
    | stats count by propertiesTopicName, expectedCount
    | table propertiesTopicName expectedCount
    | sort expectedCount ]

Here I am using append, which simply appends the result of the second search below the first. Instead, I want the results displayed beside one another, matched on the common field values of TopicName and propertiesTopicName. The end result I want is three columns:

propertiesTopicName | Todays_Count | expectedCount

How can I achieve this? Please help.
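Rather than appending, what you describe is a join on the shared topic name. The merge itself can be sketched in Python (topics and counts invented):

```python
# Merge the two result sets on the shared topic name, producing the three
# wanted columns: topic, Todays_Count, expectedCount.
todays = {"orders": 120, "payments": 80}     # TopicName -> Todays_Count
expected = {"orders": 100, "refunds": 10}    # propertiesTopicName -> expectedCount

merged = [(t, todays.get(t), expected.get(t))
          for t in sorted(set(todays) | set(expected))]
```

Topics missing from one side come through with None, mirroring the empty cells an outer join leaves in Splunk.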
I have events consisting of a msg field with data like this:

dev.scurry.com - [2020-01-05T19:08:10.7658789Z] "PUT /case-pst/msging/eith/TNT/2020-01-05/blob/tenthouse.txt HTTP/1.1" 200 0 61814 "-" "-" "182.236.164.11:25412" "182.236.0.0:12001" x_forwarded_for:"182.236.164.11, 182.236.164.11" x_forwarded_proto:"https" vcap_request_id:"asdf098a0-j453-8asdf8-876-8907asdf9087" response_time:0.079841 gorouter_time:0.000152 app_id:"asdf098a0-j453-8asdf8-876-8907asdf9087" app_index:"0" x_cf_routererror:"-" x_b3_traceid:"asdf098as78987" x_b3_spanid:"7a7sad9898d90khjk" x_b3_parentspanid:"-" q3:"7asdf987987asdf9-78968asfdklj"

I need to extract the HTTP status code, but only for PUT requests. I'm new to regex. Any advice or pointers? Thanks.
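Anchoring the capture group on the literal PUT restricts the match to PUT requests. A sketch of such a pattern in Python, against a shortened copy of the sample line (the same expression should carry over to SPL's rex with escaping adjusted):

```python
import re

line = ('dev.scurry.com - [2020-01-05T19:08:10.7658789Z] '
        '"PUT /case-pst/msging/eith/TNT/2020-01-05/blob/tenthouse.txt '
        'HTTP/1.1" 200 0 61814 "-" "-"')

# Capture the status code only when the request method is PUT.
m = re.search(r'"PUT [^"]+ HTTP/[\d.]+"\s+(?P<status>\d{3})', line)
status = m.group("status") if m else None
```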
Hello all, I need to delete duplicated events, since one of my data sources sends duplicates. There is a field "id" and also a field "version", so I can identify the latest event in order to keep it and delete the others. I need this process to run automatically, every hour for example. Any suggestions? Thanks in advance.
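Splunk can't easily rewrite indexed data, so this is usually handled at search time — for example, a scheduled search that dedups on id, keeping the highest version. The keep-latest logic itself, sketched in Python with invented events:

```python
# Keep only the highest-version event per id.
events = [{"id": "a", "version": 1}, {"id": "a", "version": 3},
          {"id": "b", "version": 2}, {"id": "a", "version": 2}]

latest = {}
for e in events:
    best = latest.get(e["id"])
    if best is None or e["version"] > best["version"]:
        latest[e["id"]] = e
```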
I am trying to extract a field (a JSON array of objects) from events, and now I would like to extract a few more fields from that JSON array:

[
  { "name": "a", "age": "19", "date_populated": "02/20/2019" },
  { "name": "b", "age": "23", "date_populated": "02/25/2019" }
]

Can you please let me know how I can get a list of the names?
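Once the field is parsed as JSON, the names are one list comprehension away; in SPL, spath with a {}.name path typically yields the same list. A Python sketch over the sample array:

```python
import json

raw = '''[{"name": "a", "age": "19", "date_populated": "02/20/2019"},
          {"name": "b", "age": "23", "date_populated": "02/25/2019"}]'''

names = [entry["name"] for entry in json.loads(raw)]
```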
Does the "Splunk Add-on for AWS" have the ability to delete the files it ingests from a S3 bucket (after ingesting into Splunk)?