All Posts


The choropleth map provides city-level resolution. Is there a way to get higher resolution, such as street or block level? Thanks!
Hi Dear Splunkers,

I have been working on creating a custom TA for counting Unicode characters in a non-English dataset (long-story discussion post linked in the PS), and I am getting these lookup file errors:

1) Error in 'lookup' command: Could not construct lookup 'ucd_count_chars_lookup, _raw, output, count'. See search.log for more details.
2) The lookup table 'ucd_count_chars_lookup' does not exist or is not available. The search job has failed due to an error. You may be able view the job in the Job Inspector.

The custom TA creation steps I followed (on my personal laptop, with a bare-minimum fresh 9.3.2 Enterprise trial install):

1) Created the custom TA named "TA-ucd" on the app creation page (read for all, execute for owner, shared with all apps).

2) Created $SPLUNK_HOME/etc/apps/TA-ucd/bin/ucd_category_lookup.py and made sure of the permissions (this file should be readable and executable by the Splunk user, i.e. have at least mode 0500):

    #!/usr/bin/env python
    import csv
    import sys
    import unicodedata

    def main():
        if len(sys.argv) != 3:
            print("Usage: python category_lookup.py [char] [category]")
            sys.exit(1)
        charfield = sys.argv[1]
        categoryfield = sys.argv[2]
        infile = sys.stdin
        outfile = sys.stdout
        r = csv.DictReader(infile)
        header = r.fieldnames
        w = csv.DictWriter(outfile, fieldnames=r.fieldnames)
        w.writeheader()
        for result in r:
            if result[charfield]:
                result[categoryfield] = unicodedata.category(result[charfield])
            w.writerow(result)

    main()

$SPLUNK_HOME/etc/apps/TA-ucd/default/transforms.conf:

    [ucd_category_lookup]
    external_cmd = ucd_category_lookup.py char category
    fields_list = char, category
    python.version = python3

$SPLUNK_HOME/etc/apps/TA-ucd/metadata/default.meta:

    []
    access = read : [ * ], write : [ admin, power ]
    export = system

3) After creating the three files mentioned above, I restarted the Splunk service (and the laptop as well).
4) The search still fails with the lookup errors mentioned above.
5) source=*search.log* does not produce anything (surprisingly!)
Could you please upvote the idea: https://ideas.splunk.com/ideas/EID-I-2176 PS - the long story is available here: https://community.splunk.com/t5/Splunk-Search/non-english-words-length-function-not-working-as-expected/m-p/705650
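As a side note for anyone debugging a script like this: an external lookup's per-row logic can be exercised outside Splunk by feeding it CSV the way Splunk would on stdin. A minimal sketch (same char/category field names assumed; in-memory streams stand in for stdin/stdout):

```python
import csv
import io
import unicodedata

def category_lookup(infile, outfile, charfield="char", categoryfield="category"):
    """Read CSV rows from infile, fill categoryfield with the Unicode
    category of the character in charfield, and write the rows to outfile."""
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row[charfield]:
            # unicodedata.category expects a single character, e.g. 'Lu' for 'A'
            row[categoryfield] = unicodedata.category(row[charfield])
        writer.writerow(row)

# Simulate what Splunk sends to the script: a CSV with the lookup fields.
src = io.StringIO("char,category\nA,\n\u00e4,\n")
dst = io.StringIO()
category_lookup(src, dst)
print(dst.getvalue())  # 'A' resolves to 'Lu', 'ä' to 'Ll'
```

If this works standalone but the lookup still fails inside Splunk, the problem is more likely in the lookup's name, permissions, or stanza wiring than in the script itself.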
Thank you. This works perfectly.
As @ITWhisperer points out, it depends on whether you have a single "series" in your data, e.g. as in this example, which has 4 rows of the "type" field:

    | makeresults
    | eval type=split("ABCD","")
    | mvexpand type
    | chart count by type

or whether you have 4 fields and a single row, as in this example, which allows you to change the colours of the "series", i.e. the columns:

    | makeresults
    | eval type=split("ABCD","")
    | mvexpand type
    | eval xx="A"
    | chart count over xx by type

If your results are like the first example, i.e. 4 rows with type/count, then you have options to convert them to the other form, but a simple option is to add

    | transpose 0 header_field=type

after your results, where "type" is your column name.
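Roughly what | transpose 0 header_field=type does to such a result set, sketched in Python (an analogy only, not SPL):

```python
# Rows as they come out of "| chart count by type": one row per type value.
rows = [{"type": "A", "count": 1}, {"type": "B", "count": 1},
        {"type": "C", "count": 1}, {"type": "D", "count": 1}]

# "| transpose 0 header_field=type" effectively promotes the "type" values
# to column names, leaving a single row -- one chart series per column.
transposed = {row["type"]: row["count"] for row in rows}
print(transposed)  # {'A': 1, 'B': 1, 'C': 1, 'D': 1}
```

With one column per former row, each column is its own series and can be coloured independently.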
You need the eval like this:

    values(eval(if(status>399, status, null()))) as list_of_Status

Otherwise the eval just returns a boolean-type result, so you need to use if() and assign the result. You can also do it after the stats using mvfilter:

    | eval list_of_Status=mvfilter(list_of_Status>399)
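To see why the bare comparison fails, the difference can be mimicked in Python (an analogy only, not Splunk's evaluation engine):

```python
statuses = [200, 404, 500, 302]

# eval(status>399) by itself yields booleans, not the status values:
booleans = [s > 399 for s in statuses]
print(booleans)  # [False, True, True, False]

# if(status>399, status, null()) passes the value through when the test
# holds, which is what values(eval(...)) needs in order to collect it:
kept = [s for s in statuses if s > 399]
print(kept)  # [404, 500]
```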
I don't see how leap years could have anything to do with it. A leap year has 1 more day than a regular year, so that doesn't explain why they would use 1 less day than a regular year...
Thanks a lot, this works fine. Is there a way we can display only the statuses which are greater than 399, like (status>399)? I tried values(eval(status>399)) but it didn't work.
Hi, here are the endpoints you must use; select the correct one based on your SCP instance type: Configure HTTP Event Collector on Splunk Enterprise. r. Ismo
Hi, you have told us here what your solution to your issue is, but what is your issue, and especially why are you sending the same event to two separate clusters? That also means duplicate license costs. Basically you could do this by replicating the sourcetype and then removing this field from the replicated sourcetype. But maybe there is a better solution once we understand your real issue? r. Ismo
Presumably, you are talking about a column chart. The colours only apply to the series, so unless you have different fields with the names you provided, the columns for the series will all be the same colour. If you could provide details of the search you are using in your chart, we might be able to help you.
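For reference, a well-formed fieldColors option in Simple XML looks like the following; the series names here are illustrative placeholders and must match your chart's actual series names exactly:

```xml
<option name="charting.fieldColors">{"Failed Logins": 0xFF9900, "Successful Logins": 0x009900}</option>
```

As noted above, if the chart has only a single series, all columns share one colour regardless of this option.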
I use the good old grep command when I need a list of the indexes referenced in all inputs across all folders, like this:

    splunk btool inputs list --debug | grep index
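If btool output isn't at hand (e.g. when auditing copies of config files offline), the same extraction can be sketched in Python — assuming simple "key = value" lines in inputs.conf-style text:

```python
import re

def indexes_in(conf_text):
    """Collect the distinct values of 'index = ...' lines from
    inputs.conf-style text, skipping comment lines."""
    found = set()
    for line in conf_text.splitlines():
        line = line.strip()
        if line.startswith("#"):
            continue
        m = re.match(r"index\s*=\s*(\S+)", line)
        if m:
            found.add(m.group(1))
    return sorted(found)

sample = """
[monitor:///var/log/syslog]
index = os
sourcetype = syslog

[monitor:///var/log/audit]
index = security
"""
print(indexes_in(sample))  # ['os', 'security']
```

Note that grep on btool output also catches lines like "indexQueueSize", so a stricter pattern such as "grep '^index '" may be wanted there too.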
Hi, are you sure that this is the correct outputs.conf definition for your host for sending events into SCP? Usually this is named something like 100_<your splunk stack name>. You can check the real configuration with:

    splunk btool outputs list tcpout --debug

This shows what those configurations are and where they are defined. Basically you should use the UF configuration which you have downloaded from your SCP stack. r. Ismo
I have created one dashboard and am trying to give fields different colours. I navigated to "Source" and tried updating the XML code as:

    <option name="charting.fieldColors">{"Failed Logins":"#FF9900", "NonCompliant_Keys":"#FF0000", "Successful Logins":"#009900", "Provisioning Successful":"#FFFF00"}</option>

but all columns are still showing as purple. Can someone help me with it?
Hosted by AWS. Yes, port 443 works.
We are looking to configure the Splunk Add-on for Microsoft Cloud Services to use a Service Principal as opposed to a client key. The documentation for the add-on does not provide insight into how one would configure it to work with a Service Principal. Does the Splunk Add-on for Microsoft Cloud Services support service principals for authentication?
I have a heavy forwarder that sends the same event to two different indexer clusters. Now this event has a new field "X" that I only want to see in one of the indexer clusters. I know that in props.conf I can configure the sourcetype to remove the field, but that applies at the sourcetype level. Is there any way to remove it on one copy and not the other? Alternatively, I could make the props.conf change at the indexer level instead.
Try this query:

    index=test
    | stats count(eval(status>399)) as Errors, count as Total_Requests, values(status) as list_of_Status by consumers
    | eval Error_Percentage=((Errors/Total_Requests)*100)
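The per-consumer arithmetic that stats performs here can be sketched in Python over a few made-up sample events (field names as in the search above; a rough analogy, not how Splunk executes it):

```python
# Hypothetical events with the fields the search uses.
events = [
    {"consumers": "app-a", "status": 200},
    {"consumers": "app-a", "status": 500},
    {"consumers": "app-a", "status": 404},
    {"consumers": "app-b", "status": 200},
]

# Per consumer: count(eval(status>399)) as Errors, count as Total_Requests,
# then Error_Percentage = Errors / Total_Requests * 100.
stats = {}
for e in events:
    s = stats.setdefault(e["consumers"], {"Errors": 0, "Total_Requests": 0})
    s["Total_Requests"] += 1
    if e["status"] > 399:
        s["Errors"] += 1
for s in stats.values():
    s["Error_Percentage"] = s["Errors"] / s["Total_Requests"] * 100

print(stats["app-a"])  # 2 errors out of 3 requests for app-a
```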
Nit: the instance is *managed* by Splunk, but it is *hosted* by either AWS or GCP.  Contact your Splunk admin if you don't know which host you have. If you're not on a trial account then the port number will be 443. Make sure the computer you are connecting from is on your Splunk Cloud Allowed IP List.
We have a new SH node which we are trying to add to the search head cluster; we updated the configs in shcluster config and other configs. After adding this node to the cluster, we now have two nodes as part of the SH cluster. We can see both nodes up and running as part of the cluster when we check with "splunk show shcluster-status". But when we check the KV store status with "splunk show kvstore-status", the old node shows as captain, while the newly built node is not joining this cluster and gives the below error in the logs.

Error in the splunkd log on the search head which has the issue:

    12-04-2024 16:36:45.402 +0000 ERROR KVStoreBulletinBoardManager [534432 KVStoreConfigurationThread] - Local KV Store has replication issues. See introspection data and mongod.log for details. Cluster has not been configured on this member. KVStore cluster has not been configured

We have configured all the cluster-related info on the newly built search head server (server.conf); we don't see any configs missing. We also see the below error on the SH UI page Messages tab:

    Failed to synchronize configuration with KVStore cluster. Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: search-head01:8191; the following nodes did not respond affirmatively: search-head01:8191 failed with Error connecting to search-head01:8191 (172.**.***.**:8191) :: caused by :: compression disabled.

Has anyone else faced this error before? We need some support here.