All Posts

There may be more than one way to do that using regular expressions. Here's one of them:

| rex field=API_RESOURCE "\/v(?<API_RESOURCE>\d+)"

Use this command line in place of the existing eval.
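For context, a minimal end-to-end sketch of how that rex could slot into the search from the question below (the index and field names are taken from the question; api_version is just an illustrative capture name):

index="some_index" API_RESOURCE!=""
| rex field=API_RESOURCE "\/v(?<api_version>\d+)"
| eval API_RESOURCE="/v".api_version
| stats count by API_RESOURCE

The same idea works by capturing straight back into API_RESOURCE, as in the reply above; the extra eval here only restores the leading /v prefix for readability.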
I have the below query I've written - I am used to SQL, SPL is still new to me. I feel like there has to be some way to make this shorter/more efficient, i.e.:

Data:
API_RESOURCE="/v63.0/gobbledygook/unrequitededits_somename_request"
API_RESOURCE="/v62.0/gobbledygook/unrequitededits_somename_response"
API_RESOURCE="/v61.0/gobbledygook/unrequitededits_somename_update"
API_RESOURCE="/v63.0/gobbledygook/unrequitededits_somename_delete"
API_RESOURCE="/v61.0/gobbledygook/unrequitededits_somename_delete"
API_RESOURCE="/v62.0/gobbledygook/unrequitededits_somename_update"
API_RESOURCE="/v61.0/gobbledygook/URI_PATH_batch_updates"

Original query:
index="some_index" API_RESOURCE!=""
| eval API_RESOURCE=case(LIKE(API_RESOURCE,"%63%"),"/v63", LIKE(API_RESOURCE,"%62%"),"/v62", LIKE(API_RESOURCE,"%61%"),"/v61", 1==1, API_RESOURCE)
| stats count by API_RESOURCE

Desired query:
index="some_index" API_RESOURCE!=""
| eval API_RESOURCE=case(LIKE(API_RESOURCE,"%6\d%"),"/v6\d", 1==1, API_RESOURCE)
| stats count by API_RESOURCE

Where the outcome would be the three versions being counted as grouped within their own version (so /v63 = 2, /v62 = 2, /v61 = 2). Every time I run the 'desired query' it completely ignores the wildcard/variable in both the search and replace parts of the case statement. Any help would be appreciated, as there are at least 64 current versions, and every time a new one is developed it gets the next highest version number. Thanks in advance!
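For what it's worth, eval's like() only understands SQL-style wildcards (% and _), not regex tokens such as \d, which is why the case() in the desired query never matches. One alternative sketch that keeps only the version prefix with replace() (the regex is illustrative and assumes every value starts with /vNN.):

index="some_index" API_RESOURCE!=""
| eval API_RESOURCE=replace(API_RESOURCE, "^(\/v\d+)\..*$", "\1")
| stats count by API_RESOURCE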
Thank you Rich. If I just remove the extra space that is in the strptime format I should be OK:

eval date_only=strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S :%3N %Z"), "%m/%d/%Y")

Let me test this - thanks a lot.
The strptime function will return null when the format string does not match the value in the field.  Other than meta-characters ('%a', etc.) the format string must match *exactly*, including spaces.  That means including spaces in the format string if they are expected in the data. That said, the sed command should be removing the extra spaces so no accommodation in strptime should be needed.
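To make that concrete, a minimal sketch of the normalize-then-parse pattern being discussed (the field name and timestamp layout are taken from the thread; the \s+ in the sed expression and the collapsed format string are assumptions meant to cover one or more stray spaces):

| rex mode=sed field=ClintReqRcvdTime "s/\s+:/:/g"
| eval date_only=strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S:%3N %Z"), "%m/%d/%Y")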
I have an application writing multiple log files per day - the files are very similar to each other. The file naming convention is logfile_MM-DD-YYYY_hh-mm.log (e.g. logfile_06-12-2025-11-47.log). My universal forwarder is set up like this:

[monitor://E:\path\logfile*.log]
disabled = 0
crcSalt = <SOURCE>
index = XXXX
sourcetype = XXXX
_meta = env::prod-new

The first log file of the day is searchable in Splunk, but every file after that is not visible. I have tried using logfile_*.log as the file name. I have also tried without the crcSalt setting, but I'm not seeing any difference. Any suggestions?
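Since the files are described as very similar to each other, one inputs.conf setting that can matter in this scenario is initCrcLength, which controls how many leading bytes the forwarder hashes when deciding whether a file has already been seen (256 by default). A sketch only, not a confirmed fix for this case - the value 1024 is an arbitrary example:

[monitor://E:\path\logfile_*.log]
disabled = 0
crcSalt = <SOURCE>
initCrcLength = 1024
index = XXXX
sourcetype = XXXX
_meta = env::prod-new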
Sometimes in this field I get an extra space (bold), so we had to add this line, and an extra space in the calculated field also - I have to handle 2 scenarios and for that we added this line:

| rex mode=sed field=ClintReqRcvdTime "s/: /:/"

Wed 4 Jun 2025 17:16:02  :161 EDT
Mon 2 Jun 2025 02:52:50  :298 EDT
Mon 9 Jun 2025 16:11:05  :860 EDT
Tue 10 Jun 2025 14:32:26:243 EDT
Wed 11 Jun 2025 13:10:32:515 EDT
Wed 11 Jun 2025 17:37:10:008 EDT

In the calculated field, when I use the format, do I have to specify the space for :161, like:

eval date_only=strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S  :%3N %Z"), "%m/%d/%Y")
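One way to cover both spacings without pinning down the exact number of spaces is to try each format and keep whichever parses (a sketch only; the two format strings are guesses based on the sample values above, and rcvd_epoch is an illustrative field name):

| eval rcvd_epoch=coalesce(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S:%3N %Z"), strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S  :%3N %Z"))
| eval date_only=strftime(rcvd_epoch, "%m/%d/%Y")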
Hi, I have this search query where I aggregate using stats and sum by a few fields. When I run the query in the Splunk portal I see the data in the Events tab but not in the Statistics tab, so I used fillnull to see which fields are causing the problem. I noticed that the fields where I am using eval are causing the issue, as I see 0 in those columns after using fillnull:

| eval status_codes_only=if((status_code>=200 and status_code<300) or status_code>=400,1,0)
| search status_codes_only=1
| rex mode=sed field=ClintReqRcvdTime "s/: /:/"
| eval date_only=strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S :%3N %Z"), "%m/%d/%Y")
| eval year_only=strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S :%3N %Z"), "%Y")
| eval month_only=strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S :%3N %Z"), "%b")
| eval week_only=floor(tonumber(strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S :%3N %Z"), "%d"))/7+1)
| eval TwoXXonly=if(status_code>=200 and status_code<300,1,0)
| eval FourXXonly=if(status_code>=400 and status_code<500,1,0)
| eval FiveXXonly=if(status_code>=500 and status_code<600,1,0)
| fillnull date_only,year_only,month_only,week_only,organization,clientId,proxyBasePath,api_name,environment,Total_2xx,Total_4xx,Total_5xx
| stats sum(TwoXXonly) as Total_2xx, sum(FourXXonly) as Total_4xx, sum(FiveXXonly) as Total_5xx by date_only,year_only,month_only,week_only,organization,clientId,proxyBasePath,api_name,environment
| table date_only,year_only,month_only,week_only,organization,clientId,proxyBasePath,api_name,environment,Total_2xx,Total_4xx,Total_5xx

When I look at the field that I used to get the date_only, year_only, week_only columns, I see data like this in the events:

Wed 11 Jun 2025 22:57:34:396 EDT
Wed 11 Jun 2025 22:56:43:254 EDT
Wed 11 Jun 2025 22:56:34:466 EDT
Wed 11 Jun 2025 22:56:28:404 EDT
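One way to narrow this down, and to avoid calling strptime four times per event, is to normalize the field, parse it once into an epoch value, and derive every date part from that (a sketch only; the format string without the space before :%3N matches the sample values shown above, and rcvd_epoch is an illustrative field name):

| rex mode=sed field=ClintReqRcvdTime "s/\s+:/:/g"
| eval rcvd_epoch=strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S:%3N %Z")
| eval date_only=strftime(rcvd_epoch, "%m/%d/%Y"), year_only=strftime(rcvd_epoch, "%Y"), month_only=strftime(rcvd_epoch, "%b"), week_only=floor(tonumber(strftime(rcvd_epoch, "%d"))/7+1)

If rcvd_epoch comes back null for some events, those are the rows where the format string still does not match the raw value.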
Thank you! I will test option B.
Hi @gcusello that does make sense for the correlation searches, but I am still interested in the impact on the datamodel acceleration itself. Will there be issues in the tsidx files if the acceleration never fully completes? Or will the next summary pick up where it left off once it hits the summarization limit? If it's the latter, does that mean the most recent data is consistently getting delayed in its acceleration because each acceleration search needs to catch up on the previous debt?
Hi @Trevorator ,
you have two solutions:
- delay the time frame: e.g. if you have a delay in acceleration of 5 minutes, you can use in your Correlation Searches as time borders e.g. from -10m@m to -5m@m instead of from -5m@m to now;
- otherwise you can use the option summariesonly=false in your tstats command, so the command also reads the not yet accelerated data, but this solution is obviously less performant than the other.
Ciao.
Giuseppe
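To make the two options concrete, a hedged sketch against a generic CIM datamodel (the datamodel and field names are placeholders, not taken from this thread):

Option 1 - delayed window:
| tstats summariesonly=true count from datamodel=Network_Traffic where earliest=-10m@m latest=-5m@m by All_Traffic.src

Option 2 - also read unaccelerated data:
| tstats summariesonly=false count from datamodel=Network_Traffic where earliest=-5m@m latest=now by All_Traffic.src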
Hello there,

In our environment we have datamodel accelerations that are consistently reaching the Max Summarization Search Time, which is the default 3600 seconds. We know the issue is related to the resources allocated to the indexing tier, as the accelerations are maxing out CPU. It will be remediated, but not immediately.

What I am interested in finding out is how the limit is implemented: if an acceleration never completes, but just times out and starts the next summary, is there the potential for some data to not be accelerated?

We also currently have searches using summariesonly=t with a time range of -30m; our max concurrent auto summarizations is 2, so I know there can be up to a 55 minute gap in tstats data, meaning the searches could miss events. While not best practice, could setting the max summarization search time to 1800 seconds be a potential solution? Thanks for your help!
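For reference, the limit being described maps to the acceleration.max_time setting in datamodels.conf, which defaults to 3600 seconds. A sketch of what halving it would look like, purely to illustrate where the knob lives rather than to recommend it (My_DataModel is a placeholder stanza name):

[My_DataModel]
acceleration = true
acceleration.earliest_time = -1mon
acceleration.max_time = 1800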
Hello, 2 questions, but the second is more of a keepalived question than it is an SC4S question.

First question: what are the advantages of SC4S vs Nginx? From what I can see it doesn't really matter which you use, it's more preference - is that correct? My current setup is 2 HFs load balancing data between them, and they will have SC4S implemented on them for syslog.

Second question: I am planning on using keepalived to LB via a VIP. I planned to just track the Podman process in keepalived the way you would for apache etc. and increment the priority if the process stopped; this would then make keepalived fail the VIP over. However, the Podman process doesn't get removed if it stops. What is the best way to achieve this? I'm guessing Podman has its own solution, but I rarely use Podman so I have no idea.
Hi, I'm searching for a way to modify my app/dashboard to be able to modify the entries of a table (such as delete/duplicate/copy/multiselect rows). Any suggestions? Maybe I have to look at the scripts from the Lookup Editor app? I really don't know where to start. I know how to write Python but I haven't created a script for this yet. Thanks.
Dashboard view
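If the table is backed by a lookup (which is what the Lookup Editor app manipulates), row-level edits can at least be prototyped in plain SPL before writing any Python; a hedged sketch, assuming a CSV lookup named my_table.csv with an id column (both names are invented for illustration):

Delete a row:
| inputlookup my_table.csv | search NOT id=42 | outputlookup my_table.csv

Duplicate a row:
| inputlookup my_table.csv | appendpipe [ search id=42 | eval id=43 ] | outputlookup my_table.csv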
Hi all, thanks for the quick replies. They helped with troubleshooting, but the issue ended up being a firewall that isn't documented and that I wasn't informed was part of the route while originally trying to diagnose the issue.
Hi @TestUser I think, as @gcusello has stated here, there isn't such a tool or capability within Splunk currently that would allow this, but it's possibly something that, with enough information, could be built into an app. It would rely on a couple of key bits of information though, such as what the use cases for the dashboard are (e.g. what is it you want to visualise, and for whom, etc.) and also whether the data is in a predictable (or ideally CIM-compliant) format, e.g. can you reference fields reliably, knowing their content (type) and names, etc. It might help if you could share a little more about what you are trying to achieve.
Thanks, I really do appreciate it!
Thanks for the feedback, I really do appreciate it!
Thanks, I appreciate it!
You already have a case open and ongoing with Splunk Support, so what are you expecting that we can offer you, especially since you didn't tell us this?
When you say that it's a problem at Splunk's end, do you mean with Splunk's relay server or within your own cloud environment? SplunkCloud sends emails to a local relay before they are sent out of Splunk's infrastructure. Even if your alerts fired successfully, errors sending the emails may not show in your Splunk _internal logs, because the failure happens between splunkd (your actual Splunk process) and an external dependency. As I said, Splunk Support should be able to access their relay logs and validate where the issue is coming from, but either way it is not possible for you to directly monitor for failures against the remote SMTP service; you might see some errors if your instance is unable to reach the local relay, but that isn't guaranteed either. I wasn't able to find any Splunk apps which monitor the local SMTP connection directly.
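If you want to check the part that is visible from your side, a rough sketch of the kind of _internal search that can surface email-send errors between splunkd and the local relay (keyword-level only; exact log strings and source files vary by version, so treat this as a starting point rather than a definitive check):

index=_internal sendemail ERROR
| table _time host source _raw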