All Posts

I put a \s before and after the : because your example showed the space before, but your sed was replacing a space after. Put the \s* where the space can be. If you want to post examples, use the code tag option in the editor </> so you can see exactly what you are posting. Like this...
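For instance, a minimal sketch (the sample event and field value here are made up, not taken from the original thread):

| makeresults
| eval _raw="user : jsmith"
| rex mode=sed field=_raw "s/\s*:\s*/:/g"

This replaces the colon plus any surrounding whitespace with a bare colon, so it works whether the space appears before or after the ":".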
The issue is that we don't want to store a cleartext password on the deployment server either. Your workaround is very similar to the Docker workaround we currently use, with a slight change. I was checking whether there is any other option to inject it directly into the HF's API without logging into each of them and without an admin login. Surely this would be an issue in a lot of integrations from HFs? I wanted to see how Splunk handles this internally for other apps.
@nareshkareeti try  | tstats summariesonly=true count FROM datamodel=Network_Traffic WHERE (All_Traffic.src_ip=* OR All_Traffic.dest_ip=*) BY All_Traffic.src_ip, All_Traffic.dest_ip | `drop_dm_object_name("All_Traffic")`
| tstats summariesonly=true count From datamodel=Network_Traffic WHERE (All_Traffic.src_ip=* OR All_Traffic.dest_ip=*)
| 'drop_dm_object_name("All_Traffic")'
| lookup IOC_IPs.csv IP AS src_ip OUTPUT IP AS matched_src
| lookup IOC_IPs.csv IP AS dest_ip OUTPUT IP AS matched_dest
| where isnotnull (matched_src) OR where isnotnull(matched_dest)

Error in 'SearchParser': Missing a search command before '''. Error at position '121' of search query '| tstats summariesonly=true count From datamodel=N...{snipped} {errorcontext = t_ip=*) | 'drop_dm_ob}'
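For reference, the parser error comes from wrapping the drop_dm_object_name macro in straight single quotes instead of backticks, and the final filter repeats "where" after the OR. Combining that with the reply above, a corrected version of this search would look roughly like this (the lookup and field names are taken from the post itself):

| tstats summariesonly=true count FROM datamodel=Network_Traffic WHERE (All_Traffic.src_ip=* OR All_Traffic.dest_ip=*) BY All_Traffic.src_ip, All_Traffic.dest_ip
| `drop_dm_object_name("All_Traffic")`
| lookup IOC_IPs.csv IP AS src_ip OUTPUT IP AS matched_src
| lookup IOC_IPs.csv IP AS dest_ip OUTPUT IP AS matched_dest
| where isnotnull(matched_src) OR isnotnull(matched_dest)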
If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
Hi all, I think I have managed it with your input. I have made two examples and they do what I want.

| makeresults
| eval tz="Summer"
| eval PlanDate="15-06-2025T20:10:30"
| eval PlanDate_utc=PlanDate . "Z"
| eval PlanDate_utc_epoch=strptime(PlanDate_utc, "%d-%m-%YT%H:%M:%S%Z")
| eval PlanDate_utc_noepoch=strftime(PlanDate_utc_epoch, "%d-%m-%YT%H:%M:%S%Z")
| eval PlanDate_epoch=strptime(PlanDate, "%d-%m-%YT%H:%M:%S")
| eval diff=PlanDate_utc_epoch-PlanDate_epoch
| eval PlanDate_my_tz=PlanDate_epoch+diff
| eval PlanDate_my_tz=strftime(PlanDate_my_tz, "%d-%m-%YT%H:%M:%S")
| table tz PlanDate PlanDate_epoch PlanDate_utc_noepoch PlanDate_utc_epoch PlanDate_my_tz diff PlanDate_utc_noepoch

| makeresults
| eval tz="Winter"
| eval PlanDate="31-10-2025T20:10:30"
| eval PlanDate_utc=PlanDate . "Z"
| eval PlanDate_utc_epoch=strptime(PlanDate_utc, "%d-%m-%YT%H:%M:%S%Z")
| eval PlanDate_utc_noepoch=strftime(PlanDate_utc_epoch, "%d-%m-%YT%H:%M:%S%Z")
| eval PlanDate_epoch=strptime(PlanDate, "%d-%m-%YT%H:%M:%S")
| eval diff=PlanDate_utc_epoch-PlanDate_epoch
| eval PlanDate_my_tz=PlanDate_epoch+diff
| eval PlanDate_my_tz=strftime(PlanDate_my_tz, "%d-%m-%YT%H:%M:%S")
| table tz PlanDate PlanDate_epoch PlanDate_utc_noepoch PlanDate_utc_epoch PlanDate_my_tz diff PlanDate_utc_noepoch

Regards,
Harry
To convert that UTC timestamp into local time, try this eval command (note that strftime needs both an epoch value and an output format):

| eval PlanDate=strftime(strptime(PlanDate . "Z", "%d-%m-%YT%H:%M:%S%Z"), "%d-%m-%YT%H:%M:%S")

Appending "Z" to the timestamp tells Splunk to treat it as UTC instead of local time.
Hi @harryvdtol  I don't think it's possible for Splunk to natively manage timestamp timezones for non-_time fields, so what I would look at doing to solve this issue is to create an eval field to convert the PlanDate into epoch, then determine the number of hours to add (if appropriate) and then convert back to a human-readable time string (a rough sketch follows after this post). You should be able to do this in a single eval and then use it to create an eval (calculated) field so you get the field automatically at search time. I'm not in front of Splunk at the moment to test this, but if it helps further I'd be happy to have a go at creating the full eval with an example.  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing.
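A minimal sketch of that approach, assuming the search head's timezone is set to Europe/Amsterdam so that strftime applies the correct summer/winter offset for each date automatically (field names taken from the question):

| eval PlanDate_epoch=strptime(PlanDate . "Z", "%d-%m-%YT%H:%M:%S%Z")
| eval PlanDate_local=strftime(PlanDate_epoch, "%d-%m-%YT%H:%M:%S")

The first eval parses the value as UTC; the second renders that epoch back in the local timezone, which adds one or two hours depending on daylight saving time.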
Hi @koshyk  Are you only needing to deploy to a single HF? If so, one approach is to generate the encrypted password, get it onto the DS, and then manage it as a normal app with the passwords.conf and other appropriate confs for the app in its "deployed state".  If you know the exact realm that will be created then using the API call you suggested would work well. If you are unsure then you could do a "dummy run" using the same app on a local instance and inspect the conf file to see what realm was used (see the sketch after this post). This is probably what I would do for a single HF. You're ensuring the credentials/sensitive details are still protected as you're only putting the encrypted value on the DS. If you do need to distribute to multiple HFs then the problem becomes more complicated! You could take the above approach, however you would need to have the same splunk.secret on each HF. This could impact existing configurations if it is changed after the HF has been commissioned and is in use. The alternative is something I've actually had to do before; it's not particularly pretty, but it involves creating a custom app with a modular input that runs on each HF it's deployed to and programmatically creates the inputs and/or encrypted passwords.    Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing.
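To illustrate the "dummy run" step: after adding the credential once on a local test instance, you can inspect the realm and stanza that Splunk actually wrote. A quick sketch (the app name and credentials are placeholders, not from the thread):

cat $SPLUNK_HOME/etc/apps/my_app/local/passwords.conf

or list the stored credentials (including their realms) over REST:

curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/my_app/storage/passwords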
Hello, I have searched some old postings, but I did not find the proper answers. In Splunk I have a date column field named PlanDate that looks like this: 31-10-2025T20:10:30. The format is: DD-MM-YYYYTHH:MM:SS. But this field is in the wrong timezone. My timezone is Amsterdam. When summertime starts I need to add 2 hours to this field, and in wintertime one hour. How do I do this? Can this be done at search time, or do I need to do this at index time? I was thinking of making a lookup with the daylight saving days for the next years, but I was hoping for a better solution. Regards, Harry
Hi all,

We're deploying a custom Splunk app (e.g., my_app) that includes a scripted input to pull data from an internal API (in-house application). This API requires an API token, and we want to store that token securely using Splunk's passwords.conf mechanism, i.e. the native storage/passwords feature that encrypts the credential on disk.

This app needs to be deployed on a Splunk Heavy Forwarder (HF) which is:
- Managed entirely via Deployment Server
- Does not have a UI or user access for entering credentials manually
- But we can get temporary shell access if absolutely needed (e.g., during bootstrap)

What We Know and Have Tried (in a dev system without a Deployment Server)

Adding the credential securely via the REST API works fine:

curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/search/storage/passwords \
  -d name=my_realm:my_api_key \
  -d password=actual_api_token

and this then stores the password encrypted in the 'search' app:

[credential::my_realm:my_api_key:]
password = $1$encrypted_string_here

However, if we deploy a plain-text password via my_app/local/passwords.conf like this:

[credential::my_realm:my_api_key:]
password = plaintext_token   # Splunk does not encrypt this

and restart Splunk, the token remains in clear text on disk, which is not acceptable for production.

We also know that encrypting the token on another instance and copying the encrypted config doesn't work, because encryption depends on the local splunk.secret, which is unique per instance. (Though we do have a worst-case workaround of taking the splunk.secret, running a Docker instance with it, creating passwords.conf there and copying it back. Quite a long-winded option.)

What is the best practice to securely bootstrap the credential? Specifically:
- Should we add the credential once via the REST API during the shell-access window, then copy the resulting passwords.conf into my_app/local/ for persistence?
- How do other Splunk apps that run on Heavy Forwarders (HF) and require passwords store credentials?
- Are there any community-recommended patterns (scripts, startup hooks, init-time credential registration, etc.) for this kind of controlled environment?
Hi @sarit_s6  I understand; unfortunately, access to the relay logs is not possible.
I want to monitor such behavior myself and not count on Splunk to update me when such a thing is happening.
Hi @tarun2505  Can you please confirm which app you have installed?  TA NextDNS (Community App) (https://splunkbase.splunk.com/app/7042) or  NextDNS API Collector for Splunk (https://splunkbase.splunk.com/app/7537) Please could you check for any error logs in _internal related to nextdns?   Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
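As a starting point for that check, a search along these lines may help (the exact terms are a guess and may need adjusting to match what the app actually logs):

index=_internal source=*splunkd.log* "nextdns" (ERROR OR WARN)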
Hi @splunklearner  In terms of naming conventions - anything which makes sense to you and your team, such as siem_<configID>. Regarding bulk creation, this is tricky due to the way in which Splunk stores secure credentials. One option is to script it with something like curl or Python requests; here is a curl example of how you could create a single input, which you will need to tweak for your environment and config (if you have many configuration IDs, you could wrap this call in a loop - see the sketch after this post):

curl 'https://yourSplunkInstance:8089/services/data/inputs/TA-Akamai_SIEM/YOUR_INPUT_NAME' \
  -H "Authorization: Bearer <your_splunk_token>" \
  -d "hostname=testing.cloudsecurity.akamaiapis.net" \
  -d "security_configuration_id_s_=1234" \
  -d "client_token=clientTokenHere" \
  -d "client_secret=clientSecretHere" \
  -d "access_token=accessTokenHere" \
  -d "initial_epoch_time=optional_InitialEpochTime" \
  -d "final_epoch_time=optional_finalEpochTime" \
  -d "limit=optional_limit" \
  -d "log_level=INFO" \
  -d "proxy_host=" \
  -d "proxy_port=" \
  -d "disable_splunk_cert_check=1" \
  -d "interval=60" \
  -d "sourcetype=akamaisiem" \
  -d "index=main"

Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
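A rough sketch of such a loop, assuming a CSV with one row per configuration (the file layout, the SPLUNK_TOKEN variable and the siem_<configID> input naming are assumptions, not from the thread):

#!/usr/bin/env bash
# configs.csv columns (assumed): config_id,hostname,client_token,client_secret,access_token
while IFS=, read -r config_id hostname client_token client_secret access_token; do
  curl -k "https://yourSplunkInstance:8089/services/data/inputs/TA-Akamai_SIEM/siem_${config_id}" \
    -H "Authorization: Bearer ${SPLUNK_TOKEN}" \
    -d "hostname=${hostname}" \
    -d "security_configuration_id_s_=${config_id}" \
    -d "client_token=${client_token}" \
    -d "client_secret=${client_secret}" \
    -d "access_token=${access_token}" \
    -d "log_level=INFO" \
    -d "disable_splunk_cert_check=1" \
    -d "interval=60" \
    -d "sourcetype=akamaisiem" \
    -d "index=main"
done < configs.csv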
Hi @parthbhawsar  Have you been able to confirm that HF is sending all its events to Splunk Cloud? ie Have you installed the UF app from your Splunk Cloud instance and been able to see the HF's _internal logs in Splunk Cloud? If so are you able to see any error logs in _internal in relation to the Cisco app? For example: index=_internal "error" ("cisco" OR "fmc")     Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
To achieve detailed monitoring and reporting of gameplay activity for two specific games accessed via PS4 or laptop, using a Squid proxy, you can implement a log monitoring and alerting solution. Since you already know the destination domains, you can configure Squid to log all HTTP and HTTPS traffic and then use a log-analysis tool or a custom script to filter the logs by those domains; a rough search example follows below.
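As a sketch, assuming the Squid access logs are onboarded into Splunk with a sourcetype such as squid:access and fields like src_ip, dest_host and bytes extracted (all of these names are assumptions, not from the original post):

index=proxy sourcetype=squid:access dest_host IN ("*game1.example.com", "*game2.example.com")
| stats count sum(bytes) AS total_bytes earliest(_time) AS first_seen latest(_time) AS last_seen BY src_ip dest_host
| convert ctime(first_seen) ctime(last_seen)

This summarises how often each client talked to the two game domains and when; the same filter could drive a scheduled alert.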
1. Be a bit more precise on how you defined the HF.
2. You don't need an index on the HF.
Hello  Please follow the doc below to install a Java agent. It includes all the steps to start an agent and monitor your Java application: https://docs.appdynamics.com/appd/23.x/latest/en/application-monitoring/install-app-server-agents/java-agent#id-.JavaAgentv23.1-InstalltheAgent If you are still facing any issues, feel free to create an AppDynamics support case to troubleshoot the issue further.
Hello  Please create an AppDynamics support case with the controller details for this: https://docs.appdynamics.com/appd/24.x/latest/en/unified-observability-experience-with-splunk/splunk-log-observer-connect-for-cisco-appdynamics/configure-cisco-appdynamics-for-splunk-log-observer-connect#id-.ConfigureCiscoAppDynamicsforSplunkLogObserverConnectv25.1-Prerequisites To enable the Splunk integration, we need to enable a flag in the backend for the account.