All Posts

I want to monitor this behavior myself rather than rely on Splunk to notify me when it happens.
Hi @tarun2505

Can you please confirm which app you have installed?
- TA NextDNS (Community App) (https://splunkbase.splunk.com/app/7042) or
- NextDNS API Collector for Splunk (https://splunkbase.splunk.com/app/7537)

Please could you check for any error logs in _internal related to nextdns?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
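If it helps, a search along these lines should surface anything the add-on is logging as an error (a rough sketch - the "nextdns" keyword match is an assumption, adjust it to whichever app you actually have installed):

index=_internal log_level=ERROR "nextdns"
| table _time, component, _raw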
Hi @splunklearner

In terms of naming conventions - anything which makes sense to you and your team, such as siem_<configID>. Bulk creation is tricky due to the way Splunk stores secure credentials, but you could script it with something like curl or Python requests. Here is a curl example of how you could create a single input - you will need to tweak this for your environment and config:

curl 'https://yourSplunkInstance:8089/services/data/inputs/TA-Akamai_SIEM/YOUR_INPUT_NAME' \
 -H "Authorization: Bearer <your_splunk_token>" \
 -d "hostname=testing.cloudsecurity.akamaiapis.net" \
 -d "security_configuration_id_s_=1234" \
 -d "client_token=clientTokenHere" \
 -d "client_secret=clientSecretHere" \
 -d "access_token=accessTokenHere" \
 -d "initial_epoch_time=optional_InitialEpochTime" \
 -d "final_epoch_time=optional_finalEpochTime" \
 -d "limit=optional_limit" \
 -d "log_level=INFO" \
 -d "proxy_host=" \
 -d "proxy_port=" \
 -d "disable_splunk_cert_check=1" \
 -d "interval=60" \
 -d "sourcetype=akamaisiem" \
 -d "index=main"

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
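Once the inputs are created, you can verify them from a search (a sketch - this assumes the REST endpoint name matches the one used in the curl call above and that your role can read it):

| rest /services/data/inputs/TA-Akamai_SIEM
| table title, hostname, security_configuration_id_s_, interval, index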
Hi @parthbhawsar

Have you been able to confirm that the HF is sending all its events to Splunk Cloud? i.e. have you installed the UF app from your Splunk Cloud instance and been able to see the HF's _internal logs in Splunk Cloud? If so, are you able to see any error logs in _internal in relation to the Cisco app? For example:

index=_internal "error" ("cisco" OR "fmc")

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
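As a quick way to confirm the HF's internal logs are arriving at all, you could run something like this in Splunk Cloud (a sketch - replace yourHFHostname with the HF's actual host value):

index=_internal host=yourHFHostname
| stats count BY sourcetype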
To achieve detailed monitoring and reporting of gameplay activity for two specific games accessed via PS4 or laptop through a Squid proxy, you can implement a log monitoring and alerting solution. Since you already know the destination domains, you can configure Squid to log all HTTP and HTTPS traffic and then use a log analysis tool (or a custom script) to filter the logs by those domains.
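If the Squid access logs are being ingested into Splunk, a search along these lines would isolate traffic for the two games (a sketch only - the index, sourcetype, domains, and the src_ip field are placeholders for your environment):

index=proxy sourcetype=squid:access ("game1.example.com" OR "game2.example.com")
| stats count AS requests, earliest(_time) AS first_seen, latest(_time) AS last_seen BY src_ip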
1. Be a bit more precise on how you defined the HF.
2. You don't need an index on the HF.
Hello

Please follow the doc below to install a Java agent. It includes all the steps to start an agent and monitor your Java application:
https://docs.appdynamics.com/appd/23.x/latest/en/application-monitoring/install-app-server-agents/java-agent#id-.JavaAgentv23.1-InstalltheAgent

If you are still facing any issue, feel free to create an AppDynamics support case to troubleshoot it further.
Hello

Please create an AppDynamics support case with the controller details for the same:
https://docs.appdynamics.com/appd/24.x/latest/en/unified-observability-experience-with-splunk/splunk-log-observer-connect-for-cisco-appdynamics/configure-cisco-appdynamics-for-splunk-log-observer-connect#id-.ConfigureCiscoAppDynamicsforSplunkLogObserverConnectv25.1-Prerequisites

To enable the Splunk integration, we need to enable a flag in the backend for the account.
Hello

You can use the Events API to get the health rule violations:
https://docs.appdynamics.com/appd/23.x/latest/en/extend-appdynamics/appdynamics-apis/alert-and-respond-api/events-api

Let me know if you need any specific data. Please add a screenshot of your requirement if needed.
Try changing the timezone variable in your time format:

TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z

("Z" as you have used it is just a character constant, which is used in some date formats.) See the "Time variables" documentation for details.
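You can sanity-check the format string with a throwaway eval before changing props.conf (the sample timestamp value is invented):

| makeresults
| eval raw_ts="2025-06-04T17:16:02.161-0400"
| eval parsed=strptime(raw_ts, "%Y-%m-%dT%H:%M:%S.%3N%z")
| eval readable=strftime(parsed, "%Y-%m-%d %H:%M:%S.%3N %z")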
As I always say, do not treat structured data like text. split is not the tool for this job. @gcusello suggests INDEXED_EXTRACTIONS and a way to set up extraction in props.conf. Short of these, you can also use multikv:

| multikv
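For reference, multikv turns table-like events (a header row plus data rows) into separate results; a self-contained example with an invented event:

| makeresults
| eval _raw="name count
alpha 3
beta 7"
| multikv forceheader=1
| table name, count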
Hello, I have been trying to configure this application on one of our on-prem Heavy Forwarders to ingest our FMC logs into our Splunk Cloud instance. So far I have been able to install the latest version of the app on the Heavy Forwarder and configure the FMC section via the eStreamer configuration, and was able to save it. I have also created the index on both the HF and the Splunk Cloud instance. However, I don't seem to be getting the logs into the cloud instance through that source. I am trying to find out what additional steps are needed to make it work. If someone has had a similar issue and was able to fix it, or knows how to resolve it, please let me know.

#ciscosecuritycloud

Thanks in advance!

Regards,
Parth
As an example - there is only 1 space (the copy-paste may show 2 spaces, but there is only 1 space in it). It's only sometimes that I get this extra space. This is how I get the values:

Wed 4 Jun 2025 17:16:02  :161 EDT  - sometimes an extra space
Wed 4 Jun 2025 17:16:02 :161 EDT  - no extra space

Why do we have 2 \s* ("\s*:" and ":\s*") - I think we just need 1 \s*:

t=strptime(replace(ClintReqRcvdTime, "\s*:\s*", ":"), "%a %d %b %Y %H:%M:%S:%Q %Z")
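You can test that the replace()/strptime combination handles both variants with a standalone search (the two sample values are taken from above, and the format string is the one you already have - assuming it parses your data correctly):

| makeresults
| eval t1="Wed 4 Jun 2025 17:16:02  :161 EDT", t2="Wed 4 Jun 2025 17:16:02 :161 EDT"
| eval p1=strptime(replace(t1, "\s*:\s*", ":"), "%a %d %b %Y %H:%M:%S:%Q %Z")
| eval p2=strptime(replace(t2, "\s*:\s*", ":"), "%a %d %b %Y %H:%M:%S:%Q %Z")
| table t1, p1, t2, p2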
Hi @Soonerseast,
why are you not using INDEXED_EXTRACTIONS = csv?
Anyway, you can put in props.conf:

[your_sourcetype]
HEADER_FIELD_LINE_NUMBER = 1
FIELD_DELIMITER = ,
FIELD_QUOTE = "

If, eventually, you don't need the header as an event, you can remove it.
Ciao.
Giuseppe
Please put the query in a code block so it's easier to read and to avoid it being rendered as emoticons. How is this query not working for you? What are the expected results, and what results do you get?

The split function should not be using 'index' as the first argument. The value of that field, "splunkdata-dev", does not contain any commas. You probably should use _raw.

What is the intention of the subsearch in the table command?
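For illustration, splitting on _raw would look something like this (a sketch only - the index and source are copied from the question, and the mvindex positions follow the header row of the data):

index="splunkdata-dev" source="/d01/log/log_splunk_feed.log"
| eval my_field_split=split(_raw, ",")
| eval log_seq=mvindex(my_field_split, 0), log_date=mvindex(my_field_split, 1), log_pkg=mvindex(my_field_split, 2), log_proc=mvindex(my_field_split, 3), log_msg=mvindex(my_field_split, 5), log_msg_type=mvindex(my_field_split, 7)
| table log_seq, log_date, log_pkg, log_proc, log_msg, log_msg_type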
Hi, my data is comma delimited; there are 2 rows with a header. I'd like the columns to be split by the comma into a more readable table. Thanks.

LOG_SEQ,LOG_DATE,LOG_PKG,LOG_PROC,LOG_ID,LOG_MSG,LOG_ADDL_MSG,LOG_MSG_TYPE,LOG_SQLERRM,LOG_SQLCODE,LOG_RECEIPT_TABLE_TYPE,LOG_RECEIPT_NUMBER,LOG_BATCH_NUMBER,LOG_RECORDS_ATTEMPTED,sOG_RECORDS_SUCCESSFUL,LOG_RECORDS_ERROR,
37205289,20250612,import_ddd,proposal_dataload (FAS),,GC Batch: 615 Rows Rejected 6,,W,,0,,,,0,0,0
37205306,20250612,hu_givecampus_import_HKS,proposal_dataload (HKS),,GC Batch: 615 - Nothing to process. Skipping DataLoader operation,,W,,0,,,,0,0,0
37205315,20250612,ddd,assignment_dataload (FAS),,GC Batch: 615 Rows Rejected 3,See harris.hu_gc_assignments_csv,W,,0,,,,0,0,0

I've tried a few things; currently I have:

<query>((index="splunkdata-dev") source="/d01/log/log_splunk_feed.log")
| eval my_field_split = split(index, ","),
  log_seq = mvindex(my_field_split, 0),
  log_date = mvindex(my_field_split, 1),
  log_pkg = mvindex(my_field_split, 2),
  log_proc = mvindex(my_field_split, 3),
  log_msg = mvindex(my_field_split, 4),
  log_addl_msg = mvindex(my_field_split, 6),
  log_msg_type = mvindex(my_field_split, 7),
  log_sqlerrm = mvindex(my_field_split, 8),
  log_sqlcode = mvindex(my_field_split, 9)
| table [| makeresults
  | eval search = "log_seq log_date log_pkg log_proc log_id log_msg log_addl_msg log_msg_type log_sqlerrm log_sqlcode"
  | table search]
I see, but with this I am not able to see the data as I want. I'd like to see the hostname on a map and, when I point at a host, have it show the average CPU and memory utilization.
Hi!
Here below is what you have requested:

props.conf

[GDPR_ZUORA]
SHOULD_LINEMERGE=false
#LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=UTF-8
INDEXED_EXTRACTIONS=csv
KV_MODE=none
category=Structured
description=Comma-separated value format. Set header and other settings in "Delimited Settings"
pulldown_type=true
HEADER_FIELD_LINE_NUMBER = 1
CHECK_FOR_HEADER = true
#SHOULD_LINEMERGE = false
#FIELD_DELIMITER = ,
#FIELD_NAMES = date,hostname,app,action,ObjectName,user,operation,value_before,value_after,op_target,description

inputs.conf

[monitor:///sftp/Zuora/LOG-Zuora-*.log]
disabled = false
index = sftp_compliance
sourcetype = GDPR_ZUORA
source = GDPR_ZUORA
initCrcLength = 256

First 2 lines of the monitored file:

DataOra,ServerSorgente,Applicazione,TipoAzione,TipologiaOperazione,ServerDestinazione,UserID,UserName,OldValue,NewValue,Note
2025-06-05T23:22:01.157Z,,Zuora,Tenant Property,UPDATED,,3,ScheduledJobUser,2025-06-04T22:07:09.005473Z,2025-06-05T22:21:30.642092Z,BIN_DATA_UPDATE_FROM
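Once that is in place, a quick check that events are being indexed with the header fields extracted (the index, sourcetype, and field names are taken from the config and file header above):

index=sftp_compliance sourcetype=GDPR_ZUORA
| head 5
| table _time, DataOra, Applicazione, TipoAzione, UserName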
Hi @rishabhpatel20,
see the usage of geostats (https://help.splunk.com/en/splunk-enterprise/search/spl-search-reference/9.4/search-commands/geostats) to display your data in a map.
Ciao.
Giuseppe
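As a rough illustration for the "average CPU/memory per host on a map" requirement (a sketch only - the index, metric field names, and the iplocation step are all assumptions about your data):

index=os_metrics
| iplocation host_ip
| geostats latfield=lat longfield=lon avg(cpu_pct) AS avg_cpu, avg(mem_pct) AS avg_mem BY host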
Hi @b17gunnr,
these seem to be JSON files. To use a regex, you must see the raw data; maybe there are some backslashes in your logs before the quotes. Check them to be sure about your TIME_PREFIX.
Ciao.
Giuseppe