All Posts

Hello @jrowland1230, I tried ingesting the sample data in my environment and the following SPL works:

source="community_question.txt" host="my_host" sourcetype="jrowland_json"
| spath path=log output=log
| spath input=log path=content output=content
| rex field=content "status\":\"(?<status_extract>\w+)\""
| table status_extract
| eval Service=case(status_extract="CANCELLED","Cancelled",status_extract="BAY","BAY",true(),"NULL")

Please refer to the screenshot as well.

Thanks,
Tejas.

---
If the above solution helps, an upvote is appreciated.
Hello @mahesh27, Ideally attribute.app.servicecode should populate the dropdown. However, for simpler usage, you can use either the rename or the spath command to give it a simple field name for use in the dashboard. Your XML input should look something like this:

<fieldset submitButton="false">
  <input type="dropdown" token="service_code_token">
    <label>Service Code</label>
    <fieldForLabel>service_code</fieldForLabel>
    <fieldForValue>service_code</fieldForValue>
    <search>
      <query>index=application-idx | rename "attribute.app.servicecode" as service_code | stats count by service_code | fields service_code</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
  </input>
</fieldset>

After setting up the input, use the selected service_code value in subsequent panels with either:

| search service_code="$service_code_token$"

OR

| search "attribute.app.servicecode"="$service_code_token$"

Thanks,
Tejas.

---
If the above solution helps, an upvote is appreciated.
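As a spath-based alternative to the rename in the populating search above, a minimal sketch like the following should also work (this assumes the same index=application-idx and the attribute.app.servicecode path from the question):

index=application-idx
| spath output=service_code path=attribute.app.servicecode
| stats count by service_code
| fields service_code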
Hello @cking, You can follow these steps to migrate the buckets from one SmartStore location to another:
- Enable maintenance mode on the cluster manager.
- Stop Splunk on the indexer peers using the splunk stop command.
- Ensure that all hot buckets have been rolled to the warm state.
- Move all the buckets from the current location to the desired one.
- Change the associated parameters in indexes.conf.
- Push the bundle to all the indexers.
- Verify that the data is searchable and connected to the new storage.

You can refer to the following documentation for migration steps: https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/MigratetoSmartStore. Although the document describes migrating from a non-SmartStore index to SmartStore, similar steps can be used for migrating to a different S3 location.

PS: Due to the complex nature of this task, this activity should be performed with Splunk Professional Services.

Thanks,
Tejas.

---
If the above solution helps, an upvote is appreciated.
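For illustration only, the indexes.conf change in the "Change the associated parameters" step might look something like the sketch below; the volume name, bucket path, endpoint, and index name are placeholder assumptions, not values from the original post:

[volume:new_remote_store]
storageType = remote
path = s3://new-smartstore-bucket/indexes
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[my_index]
remotePath = volume:new_remote_store/$_index_name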
Sorry, and I'm not getting the following message:

Error in 'lookup' command: Cannot find the source field 'xxx' in the lookup table 'lookup.csv'.
Please tell me about lookup operation.

1. When you register a new lookup table file (CSV) from the GUI, you can immediately refer to it on the search screen:

| inputlookup "lookup.csv"

However, it does not appear in the list of files in the "Lookup File" pull-down on the subsequent Create New Lookup Definition screen. It only appears after more than a day each time, so setup takes a long time. Is this a limitation of the product specifications? If you know the cause, please let us know.

2. Lookup not working

The following CSV file is registered, and a lookup definition and automatic lookup are also set up.

[lookup.csv]
PC_Name  | Status | MacAddr1    | MacAddr2
PC_Name1 | Used   | aa:bb:cc... | zz:yy:xx...
PC_Name2 | Used   | aa:bb:cc... | zz:yy:xx...
PC_Name3 | Used   | aa:bb:cc... | zz:yy:xx...

MacAddr1 and MacAddr2 are the Ethernet and WiFi addresses; I want to use MacAddr2 as the key. The logs in the target index output a field called CL_MacAddr, which is defined as a calculated field. I would like to look up the MAC address in CL_MacAddr against lookup.csv and output PC_Name and Status as fields, but it is not working. For example, when I enter the following on the search screen, only the existing fields appear, not PC_Name, Status, etc.:

index="nc-wlx402" sourcetype="NC-WIFI-3" | lookup "lookup.csv" MACAddr2 AS CL_MacAddr OutputNew

However, another lookup definition works for the same index and source type (automatic definition setting, confirmed operation). I'm assuming this is due to something basic... please help me.
index="xyz" sourcetype="abc"
| search Country="ggg" statusCode=200
| stats count as Registration
| where Registration=0

Could you please help me modify this query? The time period is the last 24 hours.
Yes, that's the actual WARN message. The worst I've seen is a warning count of 9001 with a 150MB queue; the forwarder itself forwards at a peak of over 100MB/s.

05-21-2024 18:48:47.099 +1000 WARN AutoLoadBalancedConnectionStrategy [264180 TcpOutEloop] - Current dest host connection 10.x.x.x:9997, oneTimeClient=0, _events.size()=131822, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Tue May 21 18:48:36 2024 is using 157278423 bytes. Total tcpout queue size is 157286400. Warningcount=9001

That went from Warningcount=1 at 18:48:38.538 to Warningcount=1001 at 18:48:38.771. Then:
18:48:38.90 has 2001
18:48:39.033 has 3001
18:48:39.134 has 4001
18:48:39.200 has 5001
18:48:39.336 has 6001
18:48:39.553 has 7001
18:48:46.500 has 8001
and finally 18:48:47.099 has 9001.

I suspect the backpressure is caused by an istio pod failure in K8s. I haven't tracked down the cause, but I've seen some cases where the istio ingress gateway pods in K8s are in a "not ready" state while, I suspect, still alive enough to take on traffic. During this time period I will sometimes see higher-than-normal Warningcount= entries, and often around the same time my website availability checks start failing against DNS names that point to the istio pods. My current suspicion is that it's not just Splunk-level backpressure, but I'll keep investigating (at the time, the indexing tier showed its most utilised TCP input queues at 67%, using a max() measurement on their metrics.log). The vast majority of Warningcount= entries on this forwarder show a value of 1.

The configuration for this instance is:

maxQueueSize = 150MB
autoLBVolume = 10485760
autoLBFrequency = 1
dnsResolutionInterval = 259200
# tweaks the connectionsPerTarget = 2 * approx number of indexers
connectionsPerTarget = 96
# As per NLB tuning
heartbeatFrequency = 10
connectionTTL = 75
connectionTimeout = 10
autoLBFrequency = 1
maxSendQSize = 400000
# default 30 seconds, we can retry more quickly with istio as we should move to a new instance if it goes down
backoffOnFailure = 5

The maxSendQSize was tuned for a much lower-volume forwarder and I forgot to update it for this instance, so I will increase that. This instance also appears to have grown from 30-50MB/s to closer to 100MB/s, so I'll increase the autoLBVolume setting as well.
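As a rough sketch of the planned changes described above, the adjusted outputs.conf stanza might look something like the following; the stanza name and the specific raised values are illustrative assumptions, not the poster's final settings:

[tcpout:primary_indexers]
maxQueueSize = 150MB
# assumed roughly doubled to track the ~100MB/s peak throughput
autoLBVolume = 20971520
autoLBFrequency = 1
dnsResolutionInterval = 259200
connectionsPerTarget = 96
heartbeatFrequency = 10
connectionTTL = 75
connectionTimeout = 10
# assumed raised from the earlier lower-volume tuning
maxSendQSize = 800000
backoffOnFailure = 5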
If you always want to ignore the same hosts each time, you could create a lookup file with the names of the hosts and use a search as described in this post: https://community.splunk.com/t5/Splunk-Search/How-to-search-for-all-IP-s-not-in-a-lookup-table/m-p/371170

Something like:

index=myindex NOT [| inputlookup mylookup.csv | fields host]
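To make this concrete, a minimal sketch of the lookup file might look like the following, assuming a single host column and made-up host names (the file name and host values are placeholders, not values from the thread):

host
excluded-host-01
excluded-host-02
excluded-host-03

With that file uploaded as mylookup.csv, the subsearch returns those host values and the NOT excludes them from the dashboard's base search.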
@billathena Hey billathena, as you mentioned, you are using the Qualys TA on Splunk Cloud (Victoria Experience). We are facing an issue with this TA in our environment. The problem is that when the process running the data input hits an error, it does not handle it or recover. The add-on can't start the input again, because the input reports that the PID is already running. We need to manually kill the PID via the TA; then it starts working, but it stops again after some time. Can you please tell us whether you faced this issue or not, and if you can, help with how we can resolve it? Thanks in advance.
Hi, can you say a little more about what field values you are trying to achieve?
I was finally able to resolve the issue by rewriting the entire action logic. This time, the JSON data added to the action result is not a list of dictionaries nested inside another list; instead, it is a flat list of dictionaries.
Sorry, I didn't realize the string of interest was in value. (There was a big discussion about field names recently.) If so, regex is appropriate. This should work:

| eventstats values(name) as left_name
| eval match_name = mvmap(left_name, mvappend(match_name, if(match(NAME, "^(.+_)*" . left_name . "_"), left_name, null())))
| eval joined = coalesce(name, match_name)
| fields - *name NAME
| stats values(*) as * by joined

Here I use joined instead of joined_name just to make field cleanup easier. Here is some mock data:

NAME             | left_data_var | name      | right_data_var
                 | leftbar1      | RU3NDS    |
                 | leftbar1      | SOMETHING |
                 | leftbar2      | ELSE      |
RU3NDS_abcd      |               |           | rightbar1
RU3NDS_efgh      |               |           | rightbar3
A_SOMETHING_abcd |               |           | rightbar2
SOMETHING_efgh   |               |           | rightbar1
ELSE_bcde        |               |           | rightbar2
A_RU3NDS_cdef    |               |           | rightbar3

The result is:

joined    | left_data_var | right_data_var
ELSE      | leftbar2      | rightbar2
RU3NDS    | leftbar1      | rightbar1, rightbar3
SOMETHING | leftbar1      | rightbar1, rightbar2

It is really important to illustrate data and desired output when asking a data analytics question. You could have saved everyone lots of time guessing.

This is a full emulation:

| makeresults format=csv data="name, left_data_var
RU3NDS, leftbar1
SOMETHING, leftbar1
ELSE, leftbar2"
| append [makeresults format=csv data="NAME, right_data_var
RU3NDS_abcd, rightbar1
RU3NDS_efgh, rightbar3
A_SOMETHING_abcd, rightbar2
SOMETHING_efgh, rightbar1
ELSE_bcde, rightbar2
A_RU3NDS_cdef, rightbar3"]
``` data emulation above ```
| eventstats values(name) as left_name
| eval match_name = mvmap(left_name, mvappend(match_name, if(match(NAME, "^(.+_)*" . left_name . "_"), left_name, null())))
| eval joined = coalesce(name, match_name)
| fields - *name NAME
| stats values(*) as * by joined
Hi Team, I have an auto-extracted field, auth.policies{}, and another field called user. Whenever auth.policies{} is root, I need that to be part of the user field. May I know how to do it? Is there a possibility to use case and coalesce together?
Thank you @phanTom. Sure, I will research it; I am new to Phantom, and your help is much appreciated. Regards, Harisha
We currently have a Splunk Enterprise cluster that uses SmartStore in AWS S3. We're looking to move the cluster to an entirely new AWS account.  However, we are not sure of the best way to move the contents of the SmartStore without corrupting any of the files that have been indexed.  What would be the best way to migrate from one SmartStore backend to another SmartStore backend without losing any data? 
Hi all, we have around 8 dashboards fetching data from the same index. There are around 150 hosts in this index, but we don't want to see data from a particular 50 hosts in a dashboard. How can this be done? Any inputs on this, please?
Logs are in JSON format and we want to get the attribute.app.servicecode field values as a dropdown in a classic dashboard.

Query: index=application-idx | stats count by attribute.app.servicecode

How do we get these field values into a dropdown?
Hi, sorry for the long shot. Yes, I did find a way to make it work for a while: just get the refresh token in and it will work automatically until you have to tune it again.