All Posts


Hello. Please create an AppDynamics support case with the controller details for this, per https://docs.appdynamics.com/appd/24.x/latest/en/unified-observability-experience-with-splunk/splunk-log-observer-connect-for-cisco-appdynamics/configure-cisco-appdynamics-for-splunk-log-observer-connect#id-.ConfigureCiscoAppDynamicsforSplunkLogObserverConnectv25.1-Prerequisites. To enable the Splunk integration, a flag needs to be enabled in the backend for the account.
Hello. You can use the Events API to get the health rule violations: https://docs.appdynamics.com/appd/23.x/latest/en/extend-appdynamics/appdynamics-apis/alert-and-respond-api/events-api. Let me know if you need any specific data, and please add a screenshot of your requirement if needed.
Try changing the timezone handling in your time format: TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z ("Z" as you have used it is just a character constant, which appears in some date formats). See the Time variables documentation.
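A minimal props.conf sketch of that change, reusing the stanza name and sample event from the original question (adjust the names for your environment):

[my_stanza]
TIME_PREFIX = "created":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
# %z consumes the +0000 offset, so a separate TZ setting should no longer be needed
MAX_TIMESTAMP_LOOKAHEAD = 40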
As I always say, do not treat structured data like text. split is not the tool for this job. @gcusello suggests INDEXED_EXTRACTIONS and a way to set up extraction in props.conf. Short of these, you can also use multikv, as in the sketch below.
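A minimal sketch of the multikv route, assuming the index and source from the question (placeholders) and that the header row and data rows arrive together in a single multi-line event; if each CSV row is indexed as its own event, the props.conf approach is the better fit:

index="splunkdata-dev" source="/d01/log/log_splunk_feed.log"
| multikv forceheader=1
| table LOG_SEQ LOG_DATE LOG_PKG LOG_PROC LOG_MSG LOG_MSG_TYPE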
Hello, I have been trying to configure this application on one of our on-prem heavy forwarders to ingest our FMC logs into our Splunk Cloud instance. So far I have installed the latest version of the app on the heavy forwarder, configured the FMC section via the eStreamer configuration, and saved it. I have also created the index on both the HF and the Splunk Cloud instance. However, I don't seem to be getting logs into the cloud instance from that source. I am trying to find out what additional steps are needed to make this work. If someone has had a similar issue and was able to fix it, or knows how to resolve it, please let me know. #ciscosecuritycloud Thanks in advance! Regards, Parth
As an example, there is only one space (it might be a copy-paste error that shows two spaces, but there is only one space in it). It is only sometimes that I get this extra space. This is how I get the values: Wed 4 Jun 2025 17:16:02  :161 EDT (sometimes an extra space before the colon) and Wed 4 Jun 2025 17:16:02 :161 EDT (no extra space). Why do we have two \s* (one before and one after the colon)? I think we just need one \s*: t=strptime(replace(ClintReqRcvdTime, "\s*:\s*", ":"), "%a %d %b %Y %H:%M:%S:%Q %Z")
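A quick, runnable sketch to check that the replace handles both spacings (the field name ClintReqRcvdTime and the sample values are taken from the post; keeping \s* on both sides simply makes the pattern tolerant of spaces before or after the colon, and it does no harm when there are none):

| makeresults count=2
| streamstats count AS n
| eval ClintReqRcvdTime=if(n==1, "Wed 4 Jun 2025 17:16:02  :161 EDT", "Wed 4 Jun 2025 17:16:02 :161 EDT")
| eval t=strptime(replace(ClintReqRcvdTime, "\s*:\s*", ":"), "%a %d %b %Y %H:%M:%S:%Q %Z")
| eval parsed=strftime(t, "%Y-%m-%d %H:%M:%S.%3N %Z")
| table ClintReqRcvdTime parsed

If both rows parse, the extra space is handled either way.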
Hi @Soonerseast, why are you not using INDEXED_EXTRACTIONS = csv? Anyway, you can put this in props.conf:
[your_sourcetype]
HEADER_FIELD_LINE_NUMBER = 1
FIELD_DELIMITER = ,
FIELD_QUOTE = "
If you don't need the header as an event, you can remove it. Ciao. Giuseppe
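A fuller sketch of the indexed-extractions route, assuming the sourcetype name is a placeholder and that the monitored file always begins with the header row; note that INDEXED_EXTRACTIONS has to be configured on the forwarder (or the first full Splunk instance) that reads the file:

[your_sourcetype]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
FIELD_DELIMITER = ,
FIELD_QUOTE = "
SHOULD_LINEMERGE = false
# LOG_DATE in the sample looks like %Y%m%d, e.g. 20250612 (verify against your data)
TIMESTAMP_FIELDS = LOG_DATE
TIME_FORMAT = %Y%m%d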
Please put the query in a code block so it's easier to read and to avoid it being rendered as emoticons. How is this query not working for you? What are the expected results, and what results do you get? The split function should not be using 'index' as the first argument; the value of that field, "splunkdata-dev", does not contain any commas. You probably should use _raw, as in the sketch below. What is the intention of the subsearch in the table command?
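For reference, a minimal sketch of that correction, assuming each event is a single CSV row like the sample in the question (field positions follow the header there):

index="splunkdata-dev" source="/d01/log/log_splunk_feed.log"
| eval f=split(_raw, ",")
| eval log_seq=mvindex(f,0), log_date=mvindex(f,1), log_pkg=mvindex(f,2), log_proc=mvindex(f,3), log_id=mvindex(f,4), log_msg=mvindex(f,5), log_addl_msg=mvindex(f,6), log_msg_type=mvindex(f,7), log_sqlerrm=mvindex(f,8), log_sqlcode=mvindex(f,9)
| table log_seq log_date log_pkg log_proc log_id log_msg log_addl_msg log_msg_type log_sqlerrm log_sqlcode

Bear in mind that splitting on commas misaligns fields as soon as a value contains an embedded comma, which is exactly why indexed extractions or multikv are the safer options for CSV data.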
Hi, my data is comma delimited; there are 2 rows with a header. I'd like the columns to be split by the comma into a more readable table. Thanks.
LOG_SEQ,LOG_DATE,LOG_PKG,LOG_PROC,LOG_ID,LOG_MSG,LOG_ADDL_MSG,LOG_MSG_TYPE,LOG_SQLERRM,LOG_SQLCODE,LOG_RECEIPT_TABLE_TYPE,LOG_RECEIPT_NUMBER,LOG_BATCH_NUMBER,LOG_RECORDS_ATTEMPTED,sOG_RECORDS_SUCCESSFUL,LOG_RECORDS_ERROR,
37205289,20250612,import_ddd,proposal_dataload (FAS),,GC Batch: 615 Rows Rejected 6,,W,,0,,,,0,0,0
37205306,20250612,hu_givecampus_import_HKS,proposal_dataload (HKS),,GC Batch: 615 - Nothing to process. Skipping DataLoader operation,,W,,0,,,,0,0,0
37205315,20250612,ddd,assignment_dataload (FAS),,GC Batch: 615 Rows Rejected 3,See harris.hu_gc_assignments_csv,W,,0,,,,0,0,0
I've tried a few things; currently I have:
<query>((index="splunkdata-dev") source="/d01/log/log_splunk_feed.log" )
| eval my_field_split = split(index, ",") , log_seq = mvindex(my_field_split, 0) , log_date = mvindex(my_field_split, 1) ,log_pkg= mvindex(my_field_split, 2) ,log_proc = mvindex(my_field_split, 3) ,log_msg = mvindex(my_field_split, 4) ,log_addl_msg= mvindex(my_field_split, 6) , log_msg_type = mvindex(my_field_split, 7) ,log_sqlerrm = mvindex(my_field_split, , log_sqlcode= mvindex(my_field_split, 9)
| table [|makeresults | eval search ="log_seq log_date log_pkg log_proc log_id log_msg log_addl_msg log_msg_type log_sqlerrm log_sqlcode" | table search ] table
I see, but with this I am not able to see the data the way I want. I want to see the hostname on the map, and when I point at a host, it should show the average CPU and memory utilization.
Hi! Here below is what you have requested:
props.conf
[GDPR_ZUORA]
SHOULD_LINEMERGE=false
#LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=UTF-8
INDEXED_EXTRACTIONS=csv
KV_MODE=none
category=Structured
description=Comma-separated value format. Set header and other settings in "Delimited Settings"
pulldown_type=true
HEADER_FIELD_LINE_NUMBER = 1
CHECK_FOR_HEADER = true
#SHOULD_LINEMERGE = false
#FIELD_DELIMITER = ,
#FIELD_NAMES = date,hostname,app,action,ObjectName,user,operation,value_before,value_after,op_target,description
inputs.conf
[monitor:///sftp/Zuora/LOG-Zuora-*.log]
disabled = false
index = sftp_compliance
sourcetype = GDPR_ZUORA
source = GDPR_ZUORA
initCrcLength = 256
First 2 lines of the file monitored:
DataOra,ServerSorgente,Applicazione,TipoAzione,TipologiaOperazione,ServerDestinazione,UserID,UserName,OldValue,NewValue,Note
2025-06-05T23:22:01.157Z,,Zuora,Tenant Property,UPDATED,,3,ScheduledJobUser,2025-06-04T22:07:09.005473Z,2025-06-05T22:21:30.642092Z,BIN_DATA_UPDATE_FROM
Hi @rishabhpatel20, see the usage of geostats (https://help.splunk.com/en/splunk-enterprise/search/spl-search-reference/9.4/search-commands/geostats) to display your data on a map; a sketch follows below. Ciao. Giuseppe
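A minimal sketch with the lookup and field names from the question (geostats buckets results by location, which is what the built-in Cluster Map visualization expects):

| inputlookup geolocation.csv
| eval lat=tonumber(trim(latitude)), lon=tonumber(trim(longitude))
| where isnotnull(lat) AND isnotnull(lon)
| geostats latfield=lat longfield=lon avg(avg_cpu_load) avg(avg_mem_load) by cluster_name

Run it with the Cluster Map visualization selected; hovering over a marker should show the aggregated values per cluster.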
Hi @b17gunnr, these seem to be JSON files. To use a regex, you must look at the raw data; maybe there are some backslashes in your logs before the quotes. Check them to be sure about your TIME_PREFIX. Ciao. Giuseppe
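If the quotes do turn out to be escaped in _raw, here is a purely illustrative sketch of a more tolerant prefix, reusing the stanza name from the question (verify the regex against your actual raw events before using it):

[my_stanza]
# matches both "created": " and \"created\": \"
TIME_PREFIX = \\?"created\\?":\s*\\?"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z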
Hello folks, I'm fighting some events showing up in the future and am having some trouble breaking the code for parsing an event. I have the following event (with a little redaction) and have tried some flavors of the stanza below, primarily messing with the TIME_PREFIX, to no avail. For every change I make (and a Splunk restart after the fact), Splunk just wants the event in UTC and is not considering my timezone offset. Does anyone have any suggestions or thoughts as to why I cannot get Splunk to recognize that time properly? Thank you.
{"id": 141865, "summary": "User's password changed", "remoteAddress": "X.X.X.X", "created": "2025-06-12T14:13:19.323+0000", "category": "user management", "eventSource": "", "objectItem": {"id": "lots_of_jibberish", "name": "lots_of_jibberish", "typeName": "USER", "parentId": "10000", "parentName": "com.AAA.BBB.CCC.DDD"}, "associatedItems": [{"id": "lots_of_jibberish", "name": "lots_of_jibberish", "typeName": "USER", "parentId": "10000", "parentName": "com.AAA.BBB.CCC.DDD"}]}
[my_stanza]
TIME_PREFIX = "created": "
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3NZ
TZ = UTC
Hello, I have a lookup file uploaded and now I want to see the data. I am not able to see it on the map; I can see the details in a table though. This is the query and a sample of the output. The map is blank.
| inputlookup geolocation.csv
| eval lat=tonumber(trim(latitude)), lon=tonumber(trim(longitude))
| where isnotnull(lat) AND isnotnull(lon)
| table cluster_name lat lon avg_cpu_load avg_mem_load
This is the output I get:
cluster_name lat lon avg_cpu_load avg_mem_load
ab.com 63.3441204 -8.2673368 96.88 78.55
bc.com 48.9401 62.8346587 55.49 95.49
fg.com 31.5669826 129.9782352 11 19.86
@PrewinThomas This is what I was worried was the case. You said that "Normally, Splunk does not automatically retry or continue". Does that mean there is a setting we could enable to have Splunk do this, to ensure there is no loss of .tsidx files in the short term? The goal is to have all data accelerated for Enterprise Security searches. I know the long-term solution is new machines with better IOPS, but it may be some time before they are requisitioned.
We are currently pulling Akamai logs into Splunk using the Akamai add-on. As of now I am providing a single configuration ID to pull logs, but the Akamai team asked us to pull logs for a bunch of config IDs at a time to save time. In the Name field we need to provide the service name (the configuration ID's app name), which will be different for different config IDs; there will be a single index, and they will filter based on the name provided. How do we onboard them in bulk, and what naming convention should we use there? Please help me with your inputs.
Hi Team, could you help me integrate NextDNS (community app) with Splunk? I have downloaded and configured the app but am not able to see logs.
Why are you not getting the code?
Hi @Trevorator, as @PrewinThomas pointed out, if your acceleration searches exceed the maximum time limit, you should analyze why this happens; in other words, look at your storage performance and whether system resources are sufficient. For storage performance, check that the IOPS value of each storage volume is greater than 800 using an external tool such as Bonnie++, and check how many CPUs you have in your indexers and search heads; you can see this in the Monitoring Console. Ciao. Giuseppe