All Posts


I have a Splunk clustered environment where the License Manager has a non-existent (cannot be resolved via name lookup) serverName configured (etc/system/local/server.conf -> serverName). It has been running like this for some time, but it is causing issues with license monitoring in the Monitoring Console. To eliminate this issue and bring this Splunk instance in line with the other existing instances, I tried to simply change the serverName in server.conf to the hostname and restart the Splunk service. The Splunk service starts without complaints, but the Monitoring Console suddenly reports that all the Search Heads are unreachable. Querying the Search Heads for shcluster-status results in errors. Reverting back to the old name and restarting fixes the Search Head unreachable issue and status. This License Manager server has the following roles:
* License Manager
* (Monitoring Console)
* Manager Node
I do not see why this change should affect the Search Heads. Indexers are fine. The Deployer is a different server. I found documented issues (for this kind of change) for Indexers and for the Monitoring Console itself, and notes that it can have side effects for the Deployment Server, but no real hit on Search Heads/SHC. As I do not have permanent access to this instance, I have to prepare some kind of remediation plan, or at least an analysis. I'm searching for hints on where to start my investigation. Maybe someone has successfully changed a License Manager name. Hoping that I'm missing something obvious. Thanks
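For reference, the change I attempted boils down to an edit like the one below followed by a restart; the hostname shown is only a placeholder, not the real name of the instance:

$SPLUNK_HOME/etc/system/local/server.conf (on the License Manager):

[general]
serverName = lm01.example.com    # previously the stale, non-resolvable name

Splunk restarts cleanly with this in place; the Search Head "unreachable" state in the Monitoring Console is what follows.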
Hi @elcuchi, OK, I use basic auth instead of OAuth, so it's a different scenario; OAuth was not available in the first TA versions we tested and we never moved away from basic auth (which I should prioritize now). Did you test basic auth, or is that not an option?

The thing is, for basic auth: whenever you configure the ServiceNow account in the TA, you have to pass that account as a parameter to the ServiceNow action commands OR reference it in the alert action (it is the first field it asks you to fill in). That is the account the TA uses to open the REST connection to ServiceNow and push the data there (either event or incident). AFAIK, there is no configuration in the TA that uses the actual logged-in Splunk user in the authentication context to ServiceNow to trigger those actions. Behind the scenes, every communication is done via the account configured in the TA; at least this is how it has worked for me over the past 4-5 years of using this TA.

So, question: how are you testing this? (Based on your "when we test the creation of an incident from splunk interface" statement.)

For OAuth it may be different, but according to the documentation I don't think it actually is. The documentation says that OAuth requires UI access to the ServiceNow instance, which you mentioned you don't have: "OAuth Authentication configuration requires UI access to your ServiceNow Instance. User roles that do not have UI access will not be able to configure their ServiceNow account to use OAuth."

If this is using the person logged in to access ServiceNow instead of using whatever OAuth config, it makes no sense for the TA to ask for a clientID and clientSecret, as the main purpose of those is to authenticate.
The output of the values function is always deduplicated and sorted in lexicographical order. That destroys any relationship that might exist between/among fields. The solution is to combine related fields into a single field before stats, and then break them apart again afterwards.

| eval tuple = mvzip(keyword, doc_no)
| stats values(tuple) as tuple by token_id
| mvexpand tuple
| eval keyword = mvindex(split(tuple, ","), 0), doc_no = mvindex(split(tuple, ","), 1)
| fields - tuple
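If the key field from your sample also has to stay aligned, the same idea extends by nesting mvzip. This is just a sketch against the sample data in your post, not something I have run against your real events:

| eval tuple = mvzip(mvzip(key, keyword), doc_no)
| stats values(tuple) as tuple by token_id
| mvexpand tuple
| eval parts = split(tuple, ",")
| eval key = mvindex(parts, 0), keyword = mvindex(parts, 1), doc_no = mvindex(parts, 2)
| fields - tuple, parts

mvzip uses "," as its default delimiter, so splitting on "," recovers the three original values for each row.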
The field values of the 2nd and 3rd events are interchanging. Please suggest how to maintain the order in the Splunk stats command. I can't use any other field in the stats by clause than token_id.

Sample events:

|makeresults
|eval token_id="c75136c4-bdbc-439b"|eval doc_no="GSSAGGOS_QA-2931"|eval key=2931|eval keyword="DK-BAL-AP-00613"
|append [| makeresults |eval token_id="c75136c4-bdbc-439b"|eval doc_no="GSSAGGOS_QA-2932"|eval key=2932|eval keyword="DK-Z13-SW-00002"]
|append [| makeresults |eval token_id="c75136c4-bdbc-439b"|eval doc_no="GSSAGGOS_QA-2933"|eval key=2933|eval keyword="DK-BAL-AP-00847"]
| stats values(key) as key values(keyword) as keyword values(doc_no) as doc_no by token_id
| eval row=mvrange(0,mvcount(doc_no))
| mvexpand row
| foreach doc_no keyword key [| eval <<FIELD>>=mvindex(<<FIELD>>,row)]
| fields - row

Search result output:

token_id             key   keyword          doc_no
c75136c4-bdbc-439b   2931  DK-BAL-AP-00613  GSSAGGOS_QA-2931
c75136c4-bdbc-439b   2932  DK-BAL-AP-00847  GSSAGGOS_QA-2932
c75136c4-bdbc-439b   2933  DK-Z13-SW-00002  GSSAGGOS_QA-2933

Expected output:

token_id             key   keyword          doc_no
c75136c4-bdbc-439b   2931  DK-BAL-AP-00613  GSSAGGOS_QA-2931
c75136c4-bdbc-439b   2932  DK-Z13-SW-00002  GSSAGGOS_QA-2932
c75136c4-bdbc-439b   2933  DK-BAL-AP-00847  GSSAGGOS_QA-2933
OK. You should have entries higher up regarding your wildcarded entries. They will be shown under Monitored directories. And inputstatus should show you the files with their status (where the input stands or why files are not ingested). On Linux you might just do | grep -C 10 BaptoEvents to limit the output dump to the relevant entries, but since you're on Windows, you have to use your PS-fu or cmd-fu (see the sketch below).
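For example, something like this in PowerShell might do (untested; the path to splunk.exe is an assumption, adjust it to your installation):

& "C:\Program Files\Splunk\bin\splunk.exe" list inputstatus | Select-String -Pattern "BaptoEvents" -Context 10,10

Select-String with -Context prints the matching lines plus the surrounding lines, roughly the equivalent of grep -C.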
@nieminej I'm uncertain about this; please open a Splunk support ticket to investigate the issue further.
@cbiraris In Splunk, retention policies are set at the index level, not at the sourcetype level. This means that all sourcetypes within a single index (like your xyz index) inherit the same retention period (4 months, in your case). Unfortunately, there's no native way to assign different retention periods to individual sourcetypes within the same index.
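For reference, retention is controlled per index in indexes.conf; a minimal sketch, approximating 4 months as 120 days (the paths shown are the usual defaults, adjust to your environment):

[xyz]
homePath   = $SPLUNK_DB/xyz/db
coldPath   = $SPLUNK_DB/xyz/colddb
thawedPath = $SPLUNK_DB/xyz/thaweddb
# roughly 4 months = 120 days * 86400 seconds
frozenTimePeriodInSecs = 10368000

A sourcetype that needs a different retention period would have to be routed to a separate index with its own frozenTimePeriodInSecs.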
@PickleRick I have executed the command, but nothing relevant to my stanza is visible. FYI, here are my current input settings.

inputs.conf:

[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC\000000000*-*-SZC.VIT.BaptoEvents.*]
whitelist = \.csv$
disabled = false
index = Bapto
initCrcLength = 256
sourcetype = SZC_BaptoEvent

props.conf:

[SZC_BaptoEvent]
SHOULD_LINEMERGE = false
#CHARSET = ISO-8859-1
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 23
TRANSFORMS-drop_header = remove_csv_header
TZ = UTC

transforms.conf:

[remove_csv_header]
REGEX = ^Timestamp;AlarmId;SenderType;SenderId;Severity;CreationTime;ComplexEventType;ExtraInfo
DEST_KEY = queue
FORMAT = nullQueue

Sample of the CSV files to be monitored:

Timestamp;AlarmId;SenderType;SenderId;Severity;CreationTime;ComplexEventType;ExtraInfo
2025-03-27 12:40:12.152;1526;Mpg;Shuttle_115;Information;2025-03-27 12:40:12.152;TetrisPlanningDelay;TetrisId: TetrisReservation_16_260544_bqixLeVr,ShuttleId: Shuttle_115,FirstDelaySection: A24.16,FirstSection: A8.16,LastSection: A24.16
2025-03-27 12:40:12.152;1526;Mpg;Shuttle_115;Unknown;2025-03-27 12:40:12.152;TetrisPlanningDelay;
2025-03-27 12:40:14.074;0;Shuttle;Shuttle_027;Unknown;2025-03-27 12:40:14.074;NoError;
2025-03-27 12:40:16.056;0;Shuttle;Shuttle_051;Unknown;2025-03-27 12:40:16.056;NoError;
2025-03-27 12:40:30.076;0;Shuttle;Shuttle_119;Unknown;2025-03-27 12:40:30.076;NoError;
As others already pointed out - no. So you've just hit one of the main reasons for splitting data into indexes. There are two main factors when deciding whether you want the data in a single index or in multiple ones:
1) Data retention settings (and that's your case)
2) Access control
Both of those work at index level (see the sketch below for the access-control side). There are some other things which might come into play in border cases (like not mixing high-volume and low-volume data in a single index), but you rarely need to go that deep into data architecture.
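To illustrate the access-control point: search access is granted per index on roles in authorize.conf, along these lines (the role and index names are made up for the example):

[role_app_team]
srchIndexesAllowed = app_logs
srchIndexesDefault = app_logs

If two kinds of data live in the same index, you cannot grant a role access to one of them but not the other this way.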
splunk list inputstatus
splunk list monitor

What do these two have to say? Since you're ingesting CSV files which have fixed headers, there's a good chance the CRCs match and the files are not ingested because they are treated as already seen. You might want to increase initCrcLength (or fiddle with crcSalt, but that's the last resort); see the example below.
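A minimal sketch of what that could look like in inputs.conf (the path and the initCrcLength value are only illustrative; crcSalt = <SOURCE> mixes the file path into the CRC, so renamed or copied files get re-read):

[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC\]
whitelist = \.csv$
initCrcLength = 1024
# last resort only:
# crcSalt = <SOURCE>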
So, the "Company Code" problem is solved, but now you have another problem? Please share more specifics?
@gcusello Yes, I have tried, but nothing works.
Hi @vsommer I have tried your suggestion but still no luck.

[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC\]
whitelist = \.csv$
Hello, I want to configure an alert for when a queue is full. We have Max Queue Depth and Current Queue Depth metrics. The problem is that there are 100 queues, and each queue has a different max value, so I can't use * to calculate the percentage. I don't want 100 health rules, and * is not allowed in a metric expression. Is there any way to set up such an alert? AppDynamics
The results are coming, but the ones with similar names are not; in the dns field, entries with similar names are not showing up.
By default it's supposed to be simple mode. But (and that's a big but), AOB might default to XML (and might not even be able to do it differently). You can check it like this (an example from my home lab):

# /opt/splunk/bin/splunk cmd python /opt/splunk/etc/apps/TA-api-test/test_input_1.py --scheme
<scheme>
  <title>test_input_1</title>
  <description>Go to the add-on's configuration UI and configure modular inputs under the Inputs menu.</description>
  <use_external_validation>true</use_external_validation>
  <streaming_mode>xml</streaming_mode>
  <use_single_instance>false</use_single_instance>
  <endpoint>
    <args>
      <arg name="name">
        <title>test_input_1 Data Input Name</title>
      </arg>
      <arg name="placeholder">
        <title>placeholder</title>
        <required_on_create>0</required_on_create>
        <required_on_edit>0</required_on_edit>
      </arg>
    </args>
  </endpoint>
</scheme>

As you can see - it's XML mode. And I'm not sure you can change that. At least I didn't see any option in AOB to change that. You might be able to fiddle with the input definition in AOB to see if it can explicitly break the REST results into separate events.
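For comparison, and assuming the script were running in simple mode, the scheme would advertise it like this, and the script would then write raw events to stdout instead of wrapping them in <event> XML:

<streaming_mode>simple</streaming_mode>

Whether an AOB-generated input can be switched to that mode is exactly the part I'm unsure about.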
Sorry, try with double quotes around "Company Code" in the values function:

| stats values("Company Code") as "Company Code" by timeval ip dns "Operation System" severity pluginname timeval Scan-Location is_solved blacklisted
After running the search, the "Company Code" field is empty.
| inputlookup lkp-all-findings
| lookup lkp-findings-blacklist.csv blfinding as finding OUTPUTNEW blfinding
| lookup lkp-asset-list-master "IP Adresse" as ip OUTPUTNEW Asset_Gruppe Scan-Company Scanner Scan-Location Location "DNS Name" as dns_name Betriebssystem as "Operation System"
| lookup lkp-GlobalIpRange.csv 3-Letter-Code as Location OUTPUTNEW "Company Code"
| eval is_solved=if(lastchecked>lastfound OR lastchecked == 1,1,0), blacklisted=if(isnull(blfinding),0,1), timeval=strftime(lastchecked,"%Y-%m-%d")
| fillnull value="NA" "Company Code", Scan-Location
| search is_solved=0 blacklisted=0 Scan-Location="*" "Company Code"="*" severity="high"
| stats values("Company Code") as "Company Code" by timeval ip dns "Operation System" severity pluginname timeval Scan-Location is_solved blacklisted
| fields "Company Code" timeval ip dns "Operation System" severity pluginname timeval Scan-Location is_solved blacklisted
| sort severity
Hi @uagraw01, you can also change your stanza to this:

[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC\]
whitelist = \.csv$

Hope this helps you.