All Topics


I am trying to set up props and transforms on the indexers to send PROCTITLE events to the null queue. I tried the regex below, but it doesn't seem to work.

props.conf and transforms.conf location: /app/splunk/etc/apps/TA-linux_auditd/local/

props.conf
[linux_audit]
TRANSFORMS-set = discard_proctitle
[source::/var/log/audit/audit.log]
TRANSFORMS-set = discard_proctitle

transforms.conf
[discard_proctitle]
REGEX = ^type=PROCTITLE.*
DEST_KEY = queue
FORMAT = nullQueue

Sample events:
type=PROCTITLE msg=audit(1750049138.587:1710xxxx): proctitle=737368643A206165705F667470757372205B70726xxxxx
type=PROCTITLE msg=audit(1750049130.891:1710xxxx): proctitle="(systemd)"
type=PROCTITLE msg=audit(1750049102.068:377xxxx): proctitle="/usr/lib/systemd/systemd-logind"

Could someone help me fix this issue?
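As a quick hedged check after deploying the filter and restarting the indexers (assuming the sourcetype linux_audit from the props stanza above; the index name is a placeholder), a search like this should return zero newly indexed PROCTITLE events if the nullQueue routing is taking effect:

```
index=* sourcetype=linux_audit "type=PROCTITLE" earliest=-15m
| stats count by host, source
```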
Hello Splunkers,

The time selector is not visible for a specific user, whereas it is visible for the admin role. Could anyone please suggest which capability needs to be added to the user's role to make the time selector visible on the dashboard? The time selector is visible for the admin user.
We have a dashboard that was created in Splunk Enterprise using only HTML code along with JavaScript and CSS files. Can you please help me clarify the following?
1. Does Splunk Cloud support CSS and JS files?
2. Can we write complex HTML code in a Splunk Cloud dashboard?
If these are not possible, what would be the alternative solution? Also, is it possible to add an audio file to a Dashboard Studio dashboard, so that a buzzer sound is triggered when there is a drop in the transaction success rate?
| tstats summariesonly=true count From datamodel=Network_Traffic WHERE (All_Traffic.src_ip=* OR All_Traffic.dest_ip=*)
| 'drop_dm_object_name("All_Traffic")'
| lookup IOC_IPs.csv IP AS src_ip OUTPUT IP AS matched_src
| lookup IOC_IPs.csv IP AS dest_ip OUTPUT IP AS matched_dest
| where isnotnull (matched_src) OR where isnotnull(matched_dest)

Error in 'SearchParser': Missing a search command before '''. Error at position '121' of search query '| tstats summariesonly=true count From datamodel=N...{snipped} {errorcontext = t_ip=*) | 'drop_dm_ob}'
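For reference, a hedged sketch of how the same search is commonly written, assuming IOC_IPs.csv contains a field named IP: the macro must be wrapped in backticks rather than single quotes, tstats needs a by clause so src_ip/dest_ip actually exist for the lookups to match on, and the two isnotnull() tests belong in a single where clause.

```
| tstats summariesonly=true count from datamodel=Network_Traffic where (All_Traffic.src_ip=* OR All_Traffic.dest_ip=*) by All_Traffic.src_ip All_Traffic.dest_ip
| `drop_dm_object_name("All_Traffic")`
| lookup IOC_IPs.csv IP AS src_ip OUTPUT IP AS matched_src
| lookup IOC_IPs.csv IP AS dest_ip OUTPUT IP AS matched_dest
| where isnotnull(matched_src) OR isnotnull(matched_dest)
```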
Hello, I have searched some older postings, but I did not find a proper answer. In Splunk I have a date field named PlanDate that looks like this: 31-10-2025T20:10:30. The format is DD-MM-YYYYTHH:MM:SS. This field is in the wrong timezone; my timezone is Amsterdam. When summertime is in effect I need to add 2 hours to this field, and in wintertime 1 hour. How do I do this? Can it be done at search time, or do I need to do it at index time? I was thinking of making a lookup with the daylight saving dates for the next few years, but I was hoping for a better solution. Regards, Harry
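A search-time sketch, assuming the PlanDate string actually represents UTC and the Splunk user's timezone preference is set to Europe/Amsterdam: appending an explicit +0000 offset lets strptime interpret the value as UTC, and strftime then renders it back in the user's local timezone, which takes care of the summer/winter difference automatically.

```
| eval plan_epoch = strptime(PlanDate . "+0000", "%d-%m-%YT%H:%M:%S%z")
| eval PlanDate_local = strftime(plan_epoch, "%d-%m-%YT%H:%M:%S")
```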
Hi all,

We're deploying a custom Splunk app (e.g., my_app) that includes a scripted input to pull data from an internal API (an in-house application). This API requires an API token, and we want to store that token securely using Splunk's passwords.conf mechanism, i.e. the native storage/passwords feature that encrypts the credential on disk.

The app needs to be deployed on a Splunk Heavy Forwarder (HF) which is:
- managed entirely via Deployment Server
- has no UI or user access for entering credentials manually
- but we can get temporary shell access if absolutely needed (e.g., during bootstrap)

What we know and have tried (on a dev system without a Deployment Server):

Adding the credential securely via the REST API works fine:

curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/search/storage/passwords \
  -d name=my_realm:my_api_key \
  -d password=actual_api_token

This stores the password encrypted in the 'search' app:

[credential::my_realm:my_api_key:]
password = $1$encrypted_string_here

However, if we try to deploy a plain-text password via my_app/local/passwords.conf like this:

[credential::my_realm:my_api_key:]
password = plaintext_token

Splunk does not encrypt it if I add it via the shell and restart Splunk, and the token remains in clear text on disk, which is not acceptable for production.

We also know that encrypting the token on another instance and copying the encrypted config doesn't work, because encryption depends on the local splunk.secret, which is unique per instance. (There is a worst-case workaround of grabbing splunk.secret, running a Docker instance with it, creating passwords.conf there and copying it back, but that is quite long-winded.)

What is the best practice to securely bootstrap the credential? Specifically:
- Should we add the credential once via the REST API during the shell-access window, then copy the resulting passwords.conf into my_app/local/ for persistence?
- How do other Splunk apps that run on Heavy Forwarders and require passwords store their credentials?
- Are there any community-recommended patterns (scripts, startup hooks, init-time credential registration, etc.) for this kind of controlled environment?
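As a small aside, a hedged way to confirm the bootstrap worked after the REST call (my_app as in the post; this assumes a role that is allowed to read the credential entries): list the stored passwords via the rest search command and check which app context they landed in.

```
| rest /servicesNS/nobody/my_app/storage/passwords splunk_server=local
| table title realm username eai:acl.app
```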
Hello, I have been trying to configure this application on one of our on-prem Heavy Forwarders to ingest our FMC logs into our Splunk Cloud instance. So far I have been able to install the latest version of the app on the Heavy Forwarder and configure the FMC section via the eStreamer configuration, and I was able to save it. I have also created the index on both the HF and the Splunk Cloud instance. However, I don't seem to be getting the logs into the cloud instance through that source. I am trying to find out what additional steps are needed to make it work. If someone has had a similar issue and was able to fix it, or knows how to resolve it, please let me know.

#ciscosecuritycloud Thanks in advance!

Regards, Parth
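Two hedged verification searches (your_fmc_index is a placeholder for the index created for the FMC data): the first confirms whether any events have been indexed under that index at all, and the second surfaces recent splunkd errors that might point at a failing input or forwarding problem.

```
| tstats count where index=your_fmc_index by sourcetype, host

index=_internal sourcetype=splunkd log_level=ERROR earliest=-4h
| stats count by component
```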
Hi, my data is comma delimited; there are 2 rows with a header. I'd like the columns to be split on the comma into a more readable table. Thanks.

LOG_SEQ,LOG_DATE,LOG_PKG,LOG_PROC,LOG_ID,LOG_MSG,LOG_ADDL_MSG,LOG_MSG_TYPE,LOG_SQLERRM,LOG_SQLCODE,LOG_RECEIPT_TABLE_TYPE,LOG_RECEIPT_NUMBER,LOG_BATCH_NUMBER,LOG_RECORDS_ATTEMPTED,sOG_RECORDS_SUCCESSFUL,LOG_RECORDS_ERROR,
37205289,20250612,import_ddd,proposal_dataload (FAS),,GC Batch: 615 Rows Rejected 6,,W,,0,,,,0,0,0
37205306,20250612,hu_givecampus_import_HKS,proposal_dataload (HKS),,GC Batch: 615 - Nothing to process. Skipping DataLoader operation,,W,,0,,,,0,0,0
37205315,20250612,ddd,assignment_dataload (FAS),,GC Batch: 615 Rows Rejected 3,See harris.hu_gc_assignments_csv,W,,0,,,,0,0,0

I've tried a few things; currently I have:

<query>((index="splunkdata-dev") source="/d01/log/log_splunk_feed.log" )
| eval my_field_split = split(index, ",") , log_seq = mvindex(my_field_split, 0) , log_date = mvindex(my_field_split, 1) ,log_pkg= mvindex(my_field_split, 2) ,log_proc = mvindex(my_field_split, 3) ,log_msg = mvindex(my_field_split, 4) ,log_addl_msg= mvindex(my_field_split, 6) , log_msg_type = mvindex(my_field_split, 7) ,log_sqlerrm = mvindex(my_field_split, , log_sqlcode= mvindex(my_field_split, 9)
| table [|makeresults |  eval search ="log_seq log_date log_pkg log_proc log_id log_msg log_addl_msg log_msg_type log_sqlerrm log_sqlcode" | table search ] table
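A search-time sketch, under the assumption that each event is one raw CSV row: split _raw rather than index (index only holds the index name), map each position to a named field, and skip the header row. Field positions follow the header shown above.

```
index="splunkdata-dev" source="/d01/log/log_splunk_feed.log"
| where NOT match(_raw, "^LOG_SEQ,")
| eval f = split(_raw, ",")
| eval log_seq=mvindex(f,0), log_date=mvindex(f,1), log_pkg=mvindex(f,2), log_proc=mvindex(f,3), log_id=mvindex(f,4), log_msg=mvindex(f,5), log_addl_msg=mvindex(f,6), log_msg_type=mvindex(f,7), log_sqlerrm=mvindex(f,8), log_sqlcode=mvindex(f,9)
| table log_seq log_date log_pkg log_proc log_id log_msg log_addl_msg log_msg_type log_sqlerrm log_sqlcode
```

Note that a naive split breaks if any field value itself contains a comma; configuring INDEXED_EXTRACTIONS=csv (or a search-time DELIMS/FIELDS extraction) on the sourcetype is the more robust option if that can happen.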
Hello folks, I'm fighting some events in the future and am having trouble cracking the code for parsing an event. I have the following event (with a little redaction) and have tried several flavors of the stanza below, primarily messing with the TIME_PREFIX, to no avail. For every change I make (and a Splunk restart after the fact), Splunk just wants the event in UTC and is not considering my timezone offset. Does anyone have any suggestions or thoughts as to why I cannot get Splunk to recognize that time properly? Thank you.

{"id": 141865, "summary": "User's password changed", "remoteAddress": "X.X.X.X", "created": "2025-06-12T14:13:19.323+0000", "category": "user management", "eventSource": "", "objectItem": {"id": "lots_of_jibberish", "name": "lots_of_jibberish", "typeName": "USER", "parentId": "10000", "parentName": "com.AAA.BBB.CCC.DDD"}, "associatedItems": [{"id": "lots_of_jibberish", "name": "lots_of_jibberish", "typeName": "USER", "parentId": "10000", "parentName": "com.AAA.BBB.CCC.DDD"}]}

[my_stanza]
TIME_PREFIX = "created": "
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3NZ
TZ = UTC
Hello, I have a lookup file uploaded and now I want to visualize the data. I am not able to see it on a map, though I can see the details in a table. This is the query and a sample of the output; the map is blank.

| inputlookup geolocation.csv
| eval lat=tonumber(trim(latitude)), lon=tonumber(trim(longitude))
| where isnotnull(lat) AND isnotnull(lon)
| table cluster_name lat lon avg_cpu_load avg_mem_load

This is the output I get:

cluster_name   lat          lon           avg_cpu_load   avg_mem_load
ab.com         63.3441204   -8.2673368    96.88          78.55
bc.com         48.9401      62.8346587    55.49          95.49
fg.com         31.5669826   129.9782352   11             19.86
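A hedged sketch for the classic Cluster Map visualization, which expects the output of geostats rather than an arbitrary table of lat/lon columns (field and lookup names as in the post):

```
| inputlookup geolocation.csv
| eval latitude=tonumber(trim(latitude)), longitude=tonumber(trim(longitude))
| where isnotnull(latitude) AND isnotnull(longitude)
| geostats latfield=latitude longfield=longitude avg(avg_cpu_load) AS avg_cpu_load by cluster_name
```

If the panel is a Dashboard Studio map with a marker layer instead, check which field names the layer is configured to read coordinates from; renaming latitude/longitude to lat/lon can leave the layer pointing at fields that no longer exist.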
We are currently pulling Akamai logs into Splunk using the Akamai add-on. As of now I am providing a single configuration ID to pull logs, but the Akamai team has asked us to pull logs for a bunch of config IDs at a time to save time. In the name field we need to provide the service name (the configuration ID's app name), which will be different for different config IDs; there will be a single index and they will filter based on the name provided. How do we on-board them in bulk, and what naming convention should we use there? Please help me with your inputs.
Hi Team, Could you help me integrate NextDNS (a community app) with Splunk? I have downloaded and configured the app but am not able to see any logs.
Hi Everyone, I am trying to install SplunkUI to explore it. The documentation I followed is at the following link: https://splunkui.splunk.com/Packages/create/CreatingSplunkApps When I reached the 'yarn start' stage, everything went smoothly. However, when I restarted Splunk to see the results, I couldn't see any changes from before. Does anyone know about this issue?
I have the query below that I've written. I am used to SQL; SPL is still new to me. I feel like there has to be some way to make this shorter/more efficient.

Data:
API_RESOURCE="/v63.0/gobbledygook/unrequitededits_somename_request"
API_RESOURCE="/v62.0/gobbledygook/unrequitededits_somename_response"
API_RESOURCE="/v61.0/gobbledygook/unrequitededits_somename_update"
API_RESOURCE="/v63.0/gobbledygook/unrequitededits_somename_delete"
API_RESOURCE="/v61.0/gobbledygook/unrequitededits_somename_delete"
API_RESOURCE="/v62.0/gobbledygook/unrequitededits_somename_update"
API_RESOURCE="/v61.0/gobbledygook/URI_PATH_batch_updates"

Original query:
index="some_index" API_RESOURCE!=""
| eval API_RESOURCE=case( LIKE(API_RESOURCE,"%63%"),"/v63", LIKE(API_RESOURCE,"%62%"),"/v62", LIKE(API_RESOURCE,"%61%"),"/v61",1==1, API_RESOURCE)
| stats count by API_RESOURCE

Desired query:
index="some_index" API_RESOURCE!=""
| eval API_RESOURCE=case(LIKE(API_RESOURCE,"%6\d%"),"/v6\d",1==1, API_RESOURCE)
| stats count by API_RESOURCE

The desired outcome is the versions being counted as groups within their own version (so, /v63 = 2, /v62 = 2, /v61 = 2). Every time I run the 'desired query' it completely ignores the wildcard/variable in both the search and the replace parts of the case statement. Any help would be appreciated, as there are at least 64 current versions, and every time a new one is developed it gets the next highest version number. Thanks in advance!
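A sketch of one common approach: LIKE() only understands the SQL wildcards % and _, not regex, so a regex-aware function is needed. replace() can capture the version prefix and reuse it in the output, so new versions need no query changes (index and field names taken from the post).

```
index="some_index" API_RESOURCE!=""
| eval api_version=replace(API_RESOURCE, "^(/v\d+)\..*$", "\1")
| stats count by api_version
```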
I have an application writing multiple log files per day; the files are very similar to each other. The file naming convention is logfile_MM-DD-YYYY_hh-mm.log (e.g. logfile_06-12-2025-11-47.log). My universal forwarder is set up like this:

[monitor://E:\path\logfile*.log]
disabled = 0
crcSalt = <SOURCE>
index = XXXX
sourcetype = XXXX
_meta = env::prod-new

The first log file of the day is searchable in Splunk, but every file after that is not visible. I have tried using logfile_*.log as the file name. I have also tried without the crcSalt setting, but I'm not seeing any difference. Any suggestions?
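A hedged way to see what the forwarder itself is doing with those files, assuming its internal logs are forwarded (my_forwarder_host is a placeholder; component names can vary a little by version):

```
index=_internal host=my_forwarder_host sourcetype=splunkd (component=TailReader OR component=WatchedFile OR component=TailingProcessor) "logfile_"
| table _time component _raw
```

Running `splunk list inputstatus` on the forwarder itself is another way to check whether the later files are being seen at all or skipped as already-read.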
Hi, I have a search query where I aggregate using stats and sum by a few fields. When I run the query in the Splunk portal I see data in the Events tab but not in the Statistics tab. So I used fillnull to see which fields are causing the problem, and I noticed that the fields where I am using eval are the issue, as I see 0 in those columns after using fillnull.

| eval status_codes_only=if( (status_code>=200 and status_code<300) or status_code>=400,1,0)
| search status_codes_only=1
| rex mode=sed field=ClintReqRcvdTime "s/: /:/"
| eval date_only=strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S :%3N %Z"), "%m/%d/%Y")
| eval year_only=strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S :%3N %Z"), "%Y")
| eval month_only=strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S :%3N %Z"), "%b")
| eval week_only=floor(tonumber(strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S :%3N %Z"), "%d"))/7+1)
| eval TwoXXonly=if(status_code>=200 and status_code <300,1,0)
| eval FourXXonly=if(status_code>=400 and status_code <500,1,0)
| eval FiveXXonly=if(status_code>=500 and status_code <600,1,0)
| fillnull date_only,year_only,month_only,week_only,organization,clientId,proxyBasePath,api_name,environment,Total_2xx,Total_4xx,Total_5xx
| stats sum(TwoXXonly) as Total_2xx,sum(FourXXonly) as Total_4xx,sum(FiveXXonly) as Total_5xx by date_only,year_only,month_only,week_only,organization,clientId,proxyBasePath,api_name,environment
| table date_only,year_only,month_only,week_only,organization,clientId,proxyBasePath,api_name,environment,Total_2xx,Total_4xx,Total_5xx

When I look at the field that I used to derive the date_only, year_only and week_only columns, I see data like this in the events:

Wed 11 Jun 2025 22:57:34:396 EDT
Wed 11 Jun 2025 22:56:43:254 EDT
Wed 11 Jun 2025 22:56:34:466 EDT
Wed 11 Jun 2025 22:56:28:404 EDT
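A small hedged debugging sketch: the sample values ("Wed 11 Jun 2025 22:57:34:396 EDT") contain no space before the milliseconds, so the " :%3N" in the format string (and possibly the sed step) looks like the mismatch. Isolating the strptime shows whether it returns null, which would explain the zeros that fillnull leaves behind; ClintReqRcvdTime is the field from the post.

```
| eval parsed=strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S:%3N %Z")
| eval date_only=strftime(parsed, "%m/%d/%Y")
| table ClintReqRcvdTime parsed date_only
```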
Hello there,

In our environment we have datamodel accelerations that are consistently reaching the Max Summarization Search Time, which is the default 3600 seconds. We know the issue is related to the resources allocated to the indexing tier, as the accelerations are maxing out CPU. It will be remediated, but not immediately.

What I am interested in finding out is how the limit is implemented: if an acceleration never completes but simply times out and the next summary starts, is there the potential for some data to never be accelerated?

We also currently have searches using summariesonly=t with a time range of -30m; our max concurrent auto-summarizations is 2, so I know there can be up to a 55-minute gap in the tstats data, meaning the searches could miss events. While not best practice, could setting the max summarization search time to 1800 seconds be a potential solution? Thanks for your help!
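A hedged way to see how far the acceleration lags the raw data (Your_Datamodel is a placeholder): compare recent counts with and without summariesonly; time buckets where the "accelerated" series is missing or lower than the "all" series are not yet summarized.

```
| tstats summariesonly=true count from datamodel=Your_Datamodel where earliest=-4h by _time span=10m
| eval series="accelerated"
| append
    [| tstats summariesonly=false count from datamodel=Your_Datamodel where earliest=-4h by _time span=10m
     | eval series="all"]
| chart sum(count) over _time by series
```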
Hello, I have 2 questions, but the second is more of a keepalived question than an SC4S question.

First question: what are the advantages of SC4S vs Nginx? From what I can see it doesn't really matter which you use, it's more a matter of preference; is that correct? My current setup is 2 HFs load balancing data between them, and they will have SC4S implemented on them for syslog.

Second question: I am planning on using keepalived to load balance via a VIP. I planned to just track the Podman process in keepalived, as you would for Apache etc., and adjust the priority if the process stopped; this would then make keepalived fail the VIP over. However, the Podman process doesn't get removed if it stops. What is the best way to achieve this? I'm guessing Podman has its own solution, but I rarely use Podman so I have no idea.
Hi, I'm searching for a way to modify my app/dashboard so that I can modify the entries of a table (such as delete/duplicate/copy/multi-select rows). Any suggestions? Maybe I should look at the scripts from the Lookup Editor app? I really don't know where to start. I know how to write Python, but I haven't created a script for this yet. Thanks.

Dashboard view
Hi Everyone, I encountered an issue while creating a new component with SplunkUI. I have followed the documentation tutorial https://splunkui.splunk.com/Packages/create/CreatingSplunkApps as well as the write-up at https://blog.scrt.ch/2023/01/03/getting-started-with-splunkui/ but I am facing an error as shown in the image below.

My setup:
node -v: v18.19.1
npm -v: 9.2.0
npx yarn -v: 1.22.22