All Posts

Put my additional SPL - AFTER your original search - you've added it in the middle
I am not sure I understood the additional dimension - too many numbers for my head: 8 windows, 6 samples, 5 samples... so I got lost. However, if this helps, in the first search the range bands were simply defined as the age and then a fixed 14400 window. If you want to change the window as well, then you can use another array, i.e.

| eval age_ranges=split("1,6,11,31,61,91,121,241",",")
| eval window=split("1800,3600,5400,7200,14400,14400,14400,14400",",")
...
``` Band calculation ```
| foreach 0 1 2 3 4 5 6 7
    [ eval r=tonumber(mvindex(age_ranges, <<FIELD>>))*60,
           s=tonumber(mvindex(window, <<FIELD>>)),
           zone=if(age < s + r AND age > r, <<FIELD>>, null()),
           range=mvappend(range, zone) ]

but again, not sure I understood the requirement.
Yeah, I am a bit confused as well. It seems like the last part of the query "| sort - count | head 10" does not really do anything. So I've modified my search to be like:

type="request" "request.path"="prod/"
| stats count by account_namespace
| eval namespace=""
| xyseries namespace account_namespace count
| sort - count
| head 10

Using the above, it gives me a result where each account_namespace shows as a column with the count as the value. But it is showing all of the columns, not only the top 10 with the highest count.
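A possible fix, assuming the goal is a single-row table containing only the 10 busiest namespaces as series: apply sort and head before xyseries, because after xyseries the namespaces are columns, so "| head 10" only limits rows. This is a sketch based on the search above, not a tested answer:

type="request" "request.path"="prod/"
| stats count by account_namespace
| sort - count
| head 10
| eval namespace=""
| xyseries namespace account_namespace count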
Hi there, I have this query below to search for the top policies that have been used.

type="request" "request.path"="prod/"
| stats count by policies{}
| sort -count
| head 10

By default, all the policies are generated with "default", which I want to get rid of when searching so that it properly shows the top 10 policies only. The search query above gives results like:

policies:
default
policies_1
policies_2
policies_3
....

I want to get rid of the "default" entry showing in my result. Any idea or help is really appreciated.
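One possible approach (a sketch only, not verified against this data): since stats already produces one row per policy value, rename the policies{} field so it is easier to reference, then drop the "default" row before taking the top 10:

type="request" "request.path"="prod/"
| stats count by policies{}
| rename policies{} as policy
| where policy!="default"
| sort - count
| head 10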
Your original search will already limit the top 10, as you are doing sort+head, so not sure I understand how you are getting all results?
Thanks @bowesmana for your reply and for sharing the below. I have now managed to make it a multiseries chart by applying what you've shared. However, it is showing the results for all of the account_namespace values - is there a way for me to filter to the 10 highest counts and only show those?
Below are the logs.

source=gettomcat
240628 05:59:41 6677 gettomcat: ===> TRN@q1: abbb-acabca-adste112 [Priority=Medium].
240628 05:59:41 6677 gettomcat: <--- TRN: abbb-acabca-adste112 - S.

source=puttomcat
240628 05:59:58 32296 puttomcat: ---> TRN: abbb-acabca-adste112 - Done.

From the gettomcat source I extracted the priority and time; from the puttomcat source I extracted the time. I did a calculation to get the round trip of a request and also the percentage of passed requests. Now I need to display, as per the screenshot below: Priority, and the percentage of each priority per day for the last 7 days.
Thanks for all the little cleanup suggestions. They were something I was going to get to after I got the first iteration working; I am going to put them into my notes for later... I have incorporated them, along with getting rid of the appendcols (I was aware of using a single search looking for both strings, then doing an if / case / match to determine the 'type' of event). I was not aware of the 'fix' to the range index #, but I like adding 1 to get rid of the 0 indexing. Then, what I wanted was the percentage of A / B, so at the end:

| eval percentage{range} = round((A / B) * 100)
| stats values(percentage*) AS percentage*

All tested for the first scenario. I am working on one more enhancement where I have the same 8 'windows' but 6 different samplings: 5 different 4-hour samples using decreasing window sizes (4Hr, 2Hr, 1.5Hr, 1Hr, 30Min), and the last is still a 4-hour sample but starting with a 5Min window, then 10Min, 15Min, 30Min, 60Min, 90Min, 120Min, 240Min windows. Thanks for the help and suggestions!
Apart from the problems already identified by @PickleRick, you should do as advised and provide anonymised representative examples of your events and a description of what it is you are trying to do, because your current approach does not look very performant or even workable. If you want daily statistics, you should include some sort of time factor in your by clause:

| bin _time span=1d
| stats values(*) as * by _time, TRN
Thanks PickleRick, I get each field's details from different sources; my bad, I put the same source in all the searches, but they are actually different.
License requirements depend on your licensing model. With an ingest-based license you only need a license sized for the amount of data you're ingesting. It doesn't matter what your architecture is - you can do everything on an all-in-one setup (as long as it has enough capacity) or you can have a multisite cluster with dozens of indexers and search heads - all within the same license. With resource-based licensing your environment is licensed by processing capacity, so the more indexers you have, the bigger the license you need. As this is not your first architecture/licensing-related question, I will again advise you to discuss your business needs with your local Splunk Partner - there might be other issues worth considering before choosing the final architecture. Unless you're working for a local Splunk Partner wannabe - in which case ask your employer to invest in Architecting Splunk Environments training for you.
The docs on the streamstats command say that "all accumulated statistics" are reset by the reset_* options. That would imply that the reset is global, not done on a per by-field(s) basis. It might be worth submitting docs feedback to get this stated more explicitly. The practical solution to this you already got from @ITWhisperer.
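As an illustration of the wording in question (a quick synthetic sketch, not from the original thread - the field names and the reset condition are made up), you can generate a few events in two groups and watch both running totals when the reset condition fires:

| makeresults count=8
| streamstats count as n
| eval group=if(n%2==0,"A","B"), value=n
``` reset_after takes a quoted eval expression; when it evaluates to true, accumulated statistics are reset ```
| streamstats reset_after="(value==5)" sum(value) as running_total by group

If the docs wording is taken literally, the reset triggered at value==5 clears the accumulated statistics for every group, not just the group containing the triggering event.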
It's not a very good search to begin with (unneeded multisearch and wildcard-beginning search terms) so maybe show a sample (anonymized if needed) of your data and a description of what you need to get from it. That might be easier than "fixing" this one.
As @tscroggins said - Splunk clusters are not active-passive setups. One could think of some duct-tape setup limiting network connectivity to certain times of day, but that would make the cluster as a whole appear severely degraded. You could think of an "outside Splunk" replication of the servers' state, but that's tricky and not really supported. If you have some specific business needs, discuss them with either the Splunk Presales team or your friendly local Splunk Partner.
I need to display priority data for 7 days with the percentage, however I am unable to display it for 7 days. My query below works for a single day's search but doesn't display results for 7 days. Could you please help with fixing the query? Below is my query.

| multisearch
    [ search index=myindex source=mysoruce "* from *" earliest=-7d@d latest=@d | fields TRN, tomcatget, Queue ]
    [ search index=myindex source=mysoruce *sent* earliest=-7d@d latest=@d | fields TRN, TimeMQPut, Status ]
    [ search index=myindex source=mysoruce *Priority* earliest=-7d@d latest=@d | fields TRN, Priority ]
| stats values(*) as * by TRN
| eval PPut=strptime(tomcatput, "%y%m%d %H:%M:%S")
| eval PGet=strptime(tomcatget, "%y%m%d %H:%M:%S")
| eval tomcatGet2tomcatPut=round((PPut-PGet),0)
| fillnull value="No_tomcatPut_Time" tomcatput
| fillnull value="No_tomcatGet_Time" tomcatget
| table TRN, Queue, BackEndID, Status, Priority, tomcatget, tomcatput, tomcatGet2tomcatPut
| eval E2E_5min=if(tomcatGet2tomcatPut<=300,1,0)
| eval E2E_20min=if(tomcatGet2tomcatPut>300 and tomcatGet2tomcatPut<=1200,1,0)
| eval E2E_50min=if(tomcatGet2tomcatPut>1200 and tomcatGet2tomcatPut<=3000,1,0)
| eval E2EGT50min=if(tomcatGet2tomcatPut>3000,1,0)
| eval Total = E2E_5min + E2E_20min + E2E_50min + E2EGT50min
| stats sum(E2E_5min) as sum_5min sum(E2E_20min) as sum_20min sum(E2E_50min) as sum_50min sum(E2EGT50min) as sum_50GTmin sum(Total) as sum_total by Priority
| eval bad = if(Priority="High", sum_20min + sum_50min + sum_50GTmin, if(Priority="Medium", sum_50min + sum_50GTmin, if(Priority="Low", sum_50GTmin, null())))
| eval good = if(Priority="High", sum_5min, if(Priority="Medium", sum_5min + sum_20min, if(Priority="Low", sum_5min + sum_20min + sum_50min, null())))
| eval per_cal = if(Priority="High", (good / sum_total) * 100, if(Priority="Medium", (good / sum_total) * 100, if(Priority="Low", (good / sum_total) * 100, null())))
| table Priority per_cal

Looking to get output in the below format.
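One way to get a per-day breakdown (a sketch only - it reuses the field names and thresholds from the query above and assumes _time is still present at the aggregation step) is to bin _time to a day and carry it through both stats calls. Note that the intermediate "| table" command would also need to include _time (or be removed), otherwise _time is dropped before the second stats:

| bin _time span=1d
| stats values(*) as * by _time, TRN
``` ... keep the existing strptime / duration / E2E_* eval logic from the query above ... ```
| stats sum(E2E_5min) as sum_5min sum(E2E_20min) as sum_20min sum(E2E_50min) as sum_50min sum(E2EGT50min) as sum_50GTmin sum(Total) as sum_total by _time, Priority
``` ... keep the existing good/bad/per_cal evals ... ```
| table _time, Priority, per_cal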
Apart from the direct technical answer - you can't have two of the same setting (two FORMAT entries) in the same stanza; the latter overwrites the former. But there are more issues here - why are you trying to use index-time extractions in the first place?
Connection timeout means that your end tried to establish a connection with the destination server (api.securitycenter.microsoft.com) but didn't get any response. This typically means network-level problems, such as missing firewall rules allowing outgoing traffic, or (really the same thing pushed one step further) a proxy server that hasn't been configured when direct outgoing traffic is forbidden.
Hi @MichaelBs,

If you're using Curl search, the command should automatically convert a body containing an array/list into separate events. The RIPEstat Looking Glass API returns a single object and multiple rrcs items in the data field:

| curl url="https://stat.ripe.net/data/looking-glass/data.json?resource=1.1.1.1"

{
  "messages": [
    [
      "info",
      "IP address (1.1.1.1) has been converted to its encompassing routed prefix (1.1.1.0/24)"
    ]
  ],
  "see_also": [],
  "version": "2.1",
  "data_call_name": "looking-glass",
  "data_call_status": "supported",
  "cached": false,
  "data": {
    "rrcs": [ ... ],
    "query_time": "2024-06-30T17:24:44",
    "latest_time": "2024-06-30T17:24:29",
    "parameters": {
      "resource": "1.1.1.0/24",
      "look_back_limit": 86400,
      "cache": null
    }
  },
  "query_id": "20240630172444-e3bf9bf6-dd38-4cff-aa4b-e78b33f1a2c3",
  "process_time": 70,
  "server_id": "app111",
  "build_version": "live.2024.6.24.207",
  "status": "ok",
  "status_code": 200,
  "time": "2024-06-30T17:24:44.525141"
}

You return rrcs items as individual events with various combinations of spath, mvexpand, eval, etc.:

| fields data
| spath input=data path="rrcs{}" output=rrcs
| fields rrcs
| mvexpand rrcs
| eval rrc=spath(rrcs, "rrc"), location=spath(rrcs, "location"), peers=spath(rrcs, "peers{}")
| fields rrc location peers
| mvexpand peers
| spath input=peers
| fields - peers

For experimentation, I recommend storing the data in a lookup file to limit the number of calls you make to stat.ripe.net. First search:

| curl url="https://stat.ripe.net/data/looking-glass/data.json?resource=1.1.1.1"
| outputlookup ripenet_looking_glass.csv

Subsequent searches:

| inputlookup ripenet_looking_glass.csv
| fields data
``` ... ```
Hi folks, I am trying to get Defender logs into the Splunk Add-On for Microsoft Security but I am struggling a bit. It "appears" to be configured correctly but I am seeing this error in the logs:

ERROR pid=222717 tid=MainThread file=ms_security_utils.py:get_atp_alerts_odata:261 | Exception occurred while getting data using access token : HTTPSConnectionPool(host='api.securitycenter.microsoft.com', port=443): Max retries exceeded with url: /api/alerts?$expand=evidence&$filter=lastUpdateTime+gt+2024-05-22T12:34:35Z (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fe514fa1bd0>, 'Connection to api.securitycenter.microsoft.com timed out. (connect timeout=60)'))

Is this an issue with the way the Azure Connector App is permissioned or something else entirely? Thanks in advance
I have used the two events below to test SOURCE_KEY:

<132>1 2023-12-24T09:48:05+00:00 DCSECIDKOASV02 ikeyserver 8244 - [meta sequenceId="2850227"] {Warning}, {RADIUS}, {W-006001}, {An invalid RADIUS packet has been received.}, {0x0C744774DF59FC530462C92D2781B102}, {Source Location:10.240.86.6:1812 (Authentication)}, {Client Location:10.240.86.18:42923}, {Reason:The packet is smaller than minimum size allowed for RADIUS}, {Request ID:101}, {Input Details:0x64656661756C742073656E6420737472696E67}, {Request Type:Indeterminate}
<132>1 2023-12-24T09:48:05+00:00 DCSECIDKOASV02 ikeyserver 8244 - [meta sequenceId="2850228"] {Warning}, {RADIUS}, {W-006001}, {An invalid RADIUS packet has been received.}, {0xBA42228CB3604ECFDEEBC274D3312187}, {Source Location:10.240.86.6:1812 (Authentication)}, {Client Location:10.240.86.19:18721}, {Reason:The packet is smaller than minimum size allowed for RADIUS}, {Request ID:101}, {Input Details:0x64656661756C742073656E6420737472696E67}, {Request Type:Indeterminate}

Using the regex below:

[xmlExtractionIDX]
REGEX = .*?"]\s+\{(?<Severity>\w+)\},\s+\{\w+\},\s+\{(?<DeviceID>[^}]*)\},(.*)
FORMAT = Severity::$1 DeviceID::$2 Last_Part::$3
WRITE_META = true

Up to that point it works fine. Then I want to add a more precise extraction and extract more info from the Last_Part field using SOURCE_KEY:

[xmlExtractionIDX]
REGEX = .*?"]\s+\{(?<Severity>\w+)\},\s+\{\w+\},\s+\{(?<DeviceID>[^}]*)\},(.*)
FORMAT = Severity::$1 DeviceID::$2 Last_Part::$3
SOURCE_KEY = MetaData:Last_Part
REGEX = Reason:(.*?)\}
FORMAT = Reason::$1
WRITE_META = true

But now it doesn't work. Is there any advice on how to do that using SOURCE_KEY?
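Following up on the point about duplicate settings in one stanza, a possible restructuring (a sketch only - the stanza names and sourcetype are made up, and it extracts Reason directly from _raw rather than from Last_Part, which sidesteps the SOURCE_KEY question entirely) is to split the work into two transforms and chain both from props.conf:

# transforms.conf - hypothetical stanza names
[radiusMainExtractionIDX]
REGEX = .*?"]\s+\{(?<Severity>\w+)\},\s+\{\w+\},\s+\{(?<DeviceID>[^}]*)\},(.*)
FORMAT = Severity::$1 DeviceID::$2 Last_Part::$3
WRITE_META = true

[radiusReasonExtractionIDX]
# SOURCE_KEY defaults to _raw, so Reason is pulled straight from the event text
REGEX = \{Reason:([^}]*)\}
FORMAT = Reason::$1
WRITE_META = true

# props.conf - apply both index-time transforms to your sourcetype
[your_sourcetype]
TRANSFORMS-radius = radiusMainExtractionIDX, radiusReasonExtractionIDX

Whether the original intent (reading a previously written meta field via SOURCE_KEY at index time) is actually supported is something I would verify against the transforms.conf spec rather than assume.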