All Posts


A dashboard was created in Splunk Enterprise using only HTML code along with JavaScript and CSS files. Can you please help me clarify:
1. Does Splunk Cloud support CSS and JS files?
2. Can we write complex HTML code in a Splunk Cloud dashboard?
If this is not possible, what would be an alternative solution? Also, is it possible to add an audio file in a Dashboard Studio dashboard? The idea is that if there is a drop in the transaction success rate, a buzzer sound should be played.
The issue is still not resolved. Could anyone please help?
@parthbhawsar We have recently configured Cisco FMC and successfully integrated it with Splunk. Could you please share the error you are encountering in Splunk so that I can assist you further? If you continue to face issues, I would recommend reaching out to the Cisco TAC team for additional support.
@Splunkers2 Check this https://www.reddit.com/r/Splunk/comments/17msvh2/misp_integration_error/ 
If you're sending passwords via deployment apps, they generally need to be stored in plaintext on the Deployment Server. If this isn't acceptable in your environment, you'll probably end up with a work-around.

Use the REST API Remotely

Are you able to access the HF remotely over port 8089 after the HF has been deployed? If so, the REST API method could still work. Instead of using 'localhost' as your hostname:

curl -k -u admin:changeme \
  https://hf.mydomain.com:8089/servicesNS/nobody/search/storage/passwords \
  -d name=my_realm:my_api_key \
  -d password=actual_api_token

Pre-Encrypt the Secret

Instead of copying the splunk.secret to a Docker container, encrypt the secret locally on the HF. If you have shell access, you can run something like:

read creds && $SPLUNK_HOME/bin/splunk show-encrypted --value "${creds}"

This will generate an encrypted version of $creds that is decryptable by that server's splunk.secret file. Put the string in the appropriate place in passwords.conf.

Alternately, after obtaining the encrypted string, you could insert it into the deployment app on the DS and leave it there in encrypted form (assuming, as mentioned above, you only need the credential on this one HF). Then the DS can push the entire app, encrypted password included, to the HF.
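For illustration, reusing the hypothetical realm/key names from elsewhere in this thread, the resulting stanza in my_app/local/passwords.conf would look something like:

[credential::my_realm:my_api_key:]
password = $1$encrypted_string_here

The prefix of the encrypted value ($1$ or $7$) depends on your Splunk version; the point is that only the encrypted form ever sits on the DS.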
Would we be able to automatically run the misp_alert_sighting command based on traffic matching?
Your SPL has "tick" marks around the macro drop_dm_object_name that are single quotes ('), whereas you need to use the backtick character (`):

| `drop_dm_object_name("All_Traffic")`
I put a \s before and after the : because your example showed the space before, but your sed was replacing a space after. Put the \s* where the space can be. If you want to post examples, use the code tag option in the editor </> so you can see exactly what you are posting. Like this...
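The original example didn't make it into this thread, but a hypothetical GNU sed call with \s* on both sides of the colon would look something like:

echo "field : value" | sed -E 's/\s*:\s*/:/'

which prints field:value regardless of whether the space appears before the colon, after it, or both.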
The issue is that we don't want to store a cleartext password on the deployment server either. Your workaround is very similar to the Docker workaround we currently use, with a slight change. I was checking if there is any other option to inject it directly into the API of the HF without logging into each of them and without an admin login. Surely this would be an issue in a lot of integrations from HFs? I wanted to see how Splunk handles this internally for other apps.
@nareshkareeti try

| tstats summariesonly=true count FROM datamodel=Network_Traffic WHERE (All_Traffic.src_ip=* OR All_Traffic.dest_ip=*) BY All_Traffic.src_ip, All_Traffic.dest_ip
| `drop_dm_object_name("All_Traffic")`
| tstats summariesonly=true count From datamodel=Network_Traffic WHERE (All_Traffic.src_ip=* OR All_Traffic.dest_ip=*)
| 'drop_dm_object_name("All_Traffic")'
| lookup IOC_IPs.csv IP AS src_ip OUTPUT IP AS matched_src
| lookup IOC_IPs.csv IP AS dest_ip OUTPUT IP AS matched_dest
| where isnotnull (matched_src) OR where isnotnull(matched_dest)

Error in 'SearchParser': Missing a search command before '''. Error at position '121' of search query '| tstats summariesonly=true count From datamodel=N...{snipped} {errorcontext = t_ip=*) | 'drop_dm_ob}'
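For reference, combining the backtick fix from the reply above with a single where clause (the doubled where is also a syntax error), and adding the BY clause from that reply so the src/dest fields exist for the lookups, the corrected search would look something like:

| tstats summariesonly=true count FROM datamodel=Network_Traffic WHERE (All_Traffic.src_ip=* OR All_Traffic.dest_ip=*) BY All_Traffic.src_ip, All_Traffic.dest_ip
| `drop_dm_object_name("All_Traffic")`
| lookup IOC_IPs.csv IP AS src_ip OUTPUT IP AS matched_src
| lookup IOC_IPs.csv IP AS dest_ip OUTPUT IP AS matched_dest
| where isnotnull(matched_src) OR isnotnull(matched_dest)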
If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
Hi all, I think I have managed it with your input. I have made two examples and they do what I want:

| makeresults
| eval tz="Summer"
| eval PlanDate="15-06-2025T20:10:30"
| eval PlanDate_utc=PlanDate . "Z"
| eval PlanDate_utc_epoch=strptime(PlanDate_utc, "%d-%m-%YT%H:%M:%S%Z")
| eval PlanDate_utc_noepoch=strftime(PlanDate_utc_epoch, "%d-%m-%YT%H:%M:%S%Z")
| eval PlanDate_epoch=strptime(PlanDate, "%d-%m-%YT%H:%M:%S")
| eval diff=PlanDate_utc_epoch-PlanDate_epoch
| eval PlanDate_my_tz=PlanDate_epoch+diff
| eval PlanDate_my_tz=strftime(PlanDate_my_tz, "%d-%m-%YT%H:%M:%S")
| table tz PlanDate PlanDate_epoch PlanDate_utc_noepoch PlanDate_utc_epoch PlanDate_my_tz diff

| makeresults
| eval tz="Winter"
| eval PlanDate="31-10-2025T20:10:30"
| eval PlanDate_utc=PlanDate . "Z"
| eval PlanDate_utc_epoch=strptime(PlanDate_utc, "%d-%m-%YT%H:%M:%S%Z")
| eval PlanDate_utc_noepoch=strftime(PlanDate_utc_epoch, "%d-%m-%YT%H:%M:%S%Z")
| eval PlanDate_epoch=strptime(PlanDate, "%d-%m-%YT%H:%M:%S")
| eval diff=PlanDate_utc_epoch-PlanDate_epoch
| eval PlanDate_my_tz=PlanDate_epoch+diff
| eval PlanDate_my_tz=strftime(PlanDate_my_tz, "%d-%m-%YT%H:%M:%S")
| table tz PlanDate PlanDate_epoch PlanDate_utc_noepoch PlanDate_utc_epoch PlanDate_my_tz diff

Regards, Harry
To convert that UTC timestamp into local time, try this eval command:

| eval PlanDate=strftime(strptime(PlanDate . "Z", "%d-%m-%YT%H:%M:%S%Z"), "%d-%m-%YT%H:%M:%S")

Appending "Z" to the timestamp tells Splunk to treat it as UTC instead of local time; strftime then renders the epoch value in your local time zone.
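You can sanity-check this with a throwaway search, for example:

| makeresults
| eval PlanDate="31-10-2025T20:10:30"
| eval PlanDate_local=strftime(strptime(PlanDate . "Z", "%d-%m-%YT%H:%M:%S%Z"), "%d-%m-%YT%H:%M:%S")
| table PlanDate PlanDate_local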
Hi @harryvdtol

I don't think it's possible for Splunk to natively manage timestamp time zones for non-_time fields, so what I would look at doing to solve this is to create an eval that converts the PlanDate into epoch, then determine the number of hours to add (if appropriate), and then convert back to a human-readable time string. You should be able to do this in a single eval and then use it as a calculated field so you get the field automatically at search time (see the sketch below). I'm not in front of Splunk at the moment to test this, but if it helps further I'd be happy to have a go at creating the full eval with an example.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
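As a rough illustration of that approach (untested, borrowing the "Z"-suffix trick from the other reply in this thread; it assumes your Splunk user's time zone is set to Europe/Amsterdam so strftime applies the right DST offset automatically):

| eval PlanDate_epoch=strptime(PlanDate . "Z", "%d-%m-%YT%H:%M:%S%Z")
| eval PlanDate_local=strftime(PlanDate_epoch, "%d-%m-%YT%H:%M:%S")

Because strptime sees the trailing Z it parses the value as UTC, and strftime formats the epoch in your own time zone, so the +1/+2 hour difference between winter and summer time is handled for you.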
Hi @koshyk, do you only need to deploy to a single HF? If so, you can generate the encrypted password and then manage the app as a normal deployment app, with passwords.conf and the other appropriate confs in their "deployed state". If you know the exact realm that will be created, then using the API call you suggested would work well. If you are unsure, you could do a "dummy run" with the same app on a local instance and inspect the conf file to see which realm was used. This is probably what I would do for a single HF: the credentials/sensitive details are still protected, because only the encrypted value goes on the DS.

If you do need to distribute to multiple HFs, the problem becomes more complicated. You could take the same approach, but you would need the same splunk.secret on each HF, and changing that after an HF has been commissioned and is in use could impact existing configurations. The alternative is something I've actually had to do before. It's not particularly pretty, but it involves creating a custom app with a modular input that runs on each HF it's deployed to and programmatically creates the inputs and/or encrypted passwords (see the sketch below).

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
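A minimal sketch of that modular-input idea, assuming the Splunk Python SDK (splunklib) is bundled with the app, and reusing this thread's hypothetical realm/key/token names:

import splunklib.client as client

def store_credential(session_key):
    # splunkd hands a modular input its session key on stdin;
    # use it to authenticate against the local management port.
    service = client.connect(token=session_key, host="localhost",
                             port=8089, app="my_app")
    # Creates a storage/passwords entry that splunkd encrypts on disk
    # with this HF's own splunk.secret.
    service.storage_passwords.create("actual_api_token",
                                     "my_api_key", "my_realm")

Because each HF encrypts the credential with its own splunk.secret, no plaintext has to sit on the DS and the secrets never need to be synchronised.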
Hello, I have searched some old postings, but I did not find the proper answers. In Splunk I have a column date field named PlanDate that looks like this: 31-10-2025T20:10:30. The format is: DD-MM-YYYYTHH:MM:SS. But this field is in the wrong timezone. My timezone is Amsterdam. When summertime starts I need to add 2 hours to this field, and in wintertime one hour. How do I do this? Can this be done at search time, or do I need to do this at index time? I was thinking of making a lookup with daylight saving days for the next years, but I was hoping for a better solution. Regards, Harry
Hi all,

We're deploying a custom Splunk app (e.g., my_app) that includes a scripted input to pull data from an internal API (an in-house application). This API requires an API token, and we want to store that token securely using Splunk's passwords.conf mechanism, i.e. the native storage/passwords feature that encrypts the credential on disk. This app needs to be deployed on a Splunk Heavy Forwarder (HF) which:
- is managed entirely via Deployment Server
- does not have a UI or user access for entering credentials manually
- but we can get temporary shell access if absolutely needed (e.g., during bootstrap)

What we know and have tried (on a dev system without a Deployment Server):

Adding the credential securely via the REST API works fine:

curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/search/storage/passwords \
  -d name=my_realm:my_api_key \
  -d password=actual_api_token

and this then stores the password encrypted in the 'search' app:

[credential::my_realm:my_api_key:]
password = $1$encrypted_string_here

However, if we try to deploy a plain-text password via my_app/local/passwords.conf like this:

[credential::my_realm:my_api_key:]
password = plaintext_token

Splunk does not encrypt it when we add it via the shell and restart Splunk, and the token remains in clear text on disk, which is not acceptable for production. We also know that encrypting the token on another instance and copying the encrypted config doesn't work, because encryption depends on the local splunk.secret, which is unique per instance. (Though we have a worst-case workaround of taking the splunk.secret, running a Docker instance with it, creating passwords.conf there, and copying it back. Quite a long-winded option.)

What is the best practice to securely bootstrap the credential? Specifically:
1. Should we add the credential once via the REST API during the shell-access window, then copy the resulting passwords.conf into my_app/local/ for persistence?
2. How do other Splunk apps that run on Heavy Forwarders and require passwords store their credentials?
3. Are there any community-recommended patterns (scripts, startup hooks, init-time credential registration, etc.) for this kind of controlled environment?
Hi @sarit_s6  I understand, unfortunately access to the relay logs is not possible. 
I want to monitor such behavior myself and not count on Splunk to update me when such a thing happens.