All Posts
@bspalding  Use initCrcLength if your files are extremely similar at the start and the UF is getting confused. Example (note: adjust the initCrcLength value based on the size of your similar header):

[monitor://E:\path\logfile*.log]
disabled = 0
initCrcLength = 256
crcSalt = <UNIQUESOURCE>
index = XXXX
sourcetype = XXXX
_meta = env::prod-new

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
Fix the yarn version by installing yarn correctly:
sudo npm install -g yarn@1.22.19
My bad, the solution is to install yarn correctly. My yarn installation was wrong.
Do the files have a common header?  If so, you may need to set initCrcLength to a value larger than the header.
And if you want the full version number after the dot, then try:

| rex field=API_RESOURCE "^/(?<version>v[\d\.]+)\/"

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
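If it helps, here is a quick way to sanity-check that pattern on a throwaway event (a minimal sketch; the sample value is just one of the paths from this thread):

| makeresults
| eval API_RESOURCE="/v63.0/gobbledygook/unrequitededits_somename_request"
| rex field=API_RESOURCE "^/(?<version>v[\d\.]+)\/"
| table API_RESOURCE version

This should come back with version=v63.0.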
Hi @ripvw32  Use rex with a regular expression to extract or normalize the version segment efficiently, instead of using multiple LIKE or case statements. For example:

| makeresults
| eval API_RESOURCE="/v63.0/gobbledygook/unrequitededits_somename_request"
| append [| makeresults | eval API_RESOURCE="/v62.0/gobbledygook/unrequitededits_somename_response"]
| append [| makeresults | eval API_RESOURCE="/v61.0/gobbledygook/unrequitededits_somename_update"]
| append [| makeresults | eval API_RESOURCE="/v63.0/gobbledygook/unrequitededits_somename_delete"]
| append [| makeresults | eval API_RESOURCE="/v61.0/gobbledygook/unrequitededits_somename_delete"]
| append [| makeresults | eval API_RESOURCE="/v62.0/gobbledygook/unrequitededits_somename_update"]
| append [| makeresults | eval API_RESOURCE="/v61.0/gobbledygook/URI_PATH_batch_updates"]
| rex field=API_RESOURCE "^/(?<version>v\d+)\."
| stats count by version

- rex extracts the version (e.g., v63, v62, v61) from the start of API_RESOURCE.
- stats count by version groups and counts by the extracted version.
- This approach is scalable and requires no manual updates for new versions.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Thank you so much for the response!!! That didn't seem to do the trick - I'm still seeing entries like this in my table:
/v63.0/gobbldygook/unrequietededit/describe
Also, I am seeing that API_RESOURCE also contains singular words, like "Update", "Delete", "Login" etc., with no v<two-digit number> prefix (I didn't see them before, as the data is several dozen pages long at 100 rows per page).
There may be more than one way to do that using regular expressions.  Here's one of them. | rex field=API_RESOURCE "\/v(?<API_RESOURCE>\d+)" Use this command line in place of the existing eval.
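A minimal, self-contained sketch of that substitution on one of the sample values (this assumes, as I understand it, that rex is allowed to overwrite the existing API_RESOURCE value):

| makeresults
| eval API_RESOURCE="/v63.0/gobbledygook/unrequitededits_somename_request"
| rex field=API_RESOURCE "\/v(?<API_RESOURCE>\d+)"
| stats count by API_RESOURCE

With the full data set this should group everything under 61, 62, 63, and so on, with no manual case branches to maintain as new versions appear.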
I have the below query I've written - I am used to SQL, SPL is still new to me. I feel like there has to be some way to make this shorter/more efficient, i.e.:

Data:
API_RESOURCE="/v63.0/gobbledygook/unrequitededits_somename_request"
API_RESOURCE="/v62.0/gobbledygook/unrequitededits_somename_response"
API_RESOURCE="/v61.0/gobbledygook/unrequitededits_somename_update"
API_RESOURCE="/v63.0/gobbledygook/unrequitededits_somename_delete"
API_RESOURCE="/v61.0/gobbledygook/unrequitededits_somename_delete"
API_RESOURCE="/v62.0/gobbledygook/unrequitededits_somename_update"
API_RESOURCE="/v61.0/gobbledygook/URI_PATH_batch_updates"

Original query:
index="some_index" API_RESOURCE!=""
| eval API_RESOURCE=case(
    LIKE(API_RESOURCE,"%63%"),"/v63",
    LIKE(API_RESOURCE,"%62%"),"/v62",
    LIKE(API_RESOURCE,"%61%"),"/v61",
    1==1, API_RESOURCE)
| stats count by API_RESOURCE

Desired query:
index="some_index" API_RESOURCE!=""
| eval API_RESOURCE=case(LIKE(API_RESOURCE,"%6\d%"),"/v6\d",1==1, API_RESOURCE)
| stats count by API_RESOURCE

Where the outcome would be the three versions being counted as grouped within their own version (so /v63 = 2, /v62 = 2, /v61 = 2). Every time I run the 'desired query' it completely ignores the wildcard/variable in both the search and replace parts of the case statement. Any help would be appreciated, as there are at least 64 current versions, and every time a new one is developed it gets the next highest version number. Thanks in advance!
Thank you Rich. If I just remove the extra space that is in the strptime function I should be OK:
eval date_only=strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S :%3N %Z"), "%m/%d/%Y")
Let me test this - thanks a lot
The strptime function will return null when the format string does not match the value in the field.  Other than meta-characters ('%a', etc.) the format string must match *exactly*, including spaces.  That means including spaces in the format string if they are expected in the data. That said, the sed command should be removing the extra spaces so no accommodation in strptime should be needed.
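A small illustration of that point, using one of the timestamps from this thread. Note the sed pattern below is my own variant that strips any run of spaces before the millisecond colon, so treat it as a sketch rather than the exact line from the question:

| makeresults
| eval ClintReqRcvdTime="Wed 4 Jun 2025 17:16:02  :161 EDT"
| eval before_sed=strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S:%3N %Z")
| rex mode=sed field=ClintReqRcvdTime "s/\s+:/:/"
| eval after_sed=strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S:%3N %Z"), "%m/%d/%Y")
| table ClintReqRcvdTime before_sed after_sed

before_sed should come back null (the two spaces break the exact match), while after_sed should come back as 06/04/2025.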
I have an application writing multiple log files per day - the files are very similar to each other. The file naming convention is logfile_MM-DD-YYYY_hh-mm.log (e.g. logfile_06-12-2025-11-47.log). My universal forwarder is set up like this:

[monitor://E:\path\logfile*.log]
disabled = 0
crcSalt = <SOURCE>
index = XXXX
sourcetype = XXXX
_meta = env::prod-new

The first log file of the day is searchable in Splunk, but every file after that is not visible. I have tried using logfile_*.log as the file name. I have also tried without the crcSalt command, but I'm not seeing any difference. Any suggestions?
Sometimes in this field I get an extra space (bold below), so we had to add this line, and an extra space in the calculated field also - I have to handle 2 scenarios, and for that we added this line:
| rex mode=sed field=ClintReqRcvdTime "s/: /:/"

Wed 4 Jun 2025 17:16:02  :161 EDT
Mon 2 Jun 2025 02:52:50  :298 EDT
Mon 9 Jun 2025 16:11:05  :860 EDT
Tue 10 Jun 2025 14:32:26:243 EDT
Wed 11 Jun 2025 13:10:32:515 EDT
Wed 11 Jun 2025 17:37:10:008 EDT

In the calculated field, when I use the format, do I have to specify the space for "  :161", like:
eval date_only=strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S  :%3N %Z"), "%m/%d/%Y")
Hi, I have this search query where I aggregate using stats and sum by a few fields. When I run the query in the Splunk portal I see the data in the Events tab but not in the Statistics tab. So I used fillnull to see which fields are causing the problem. I noticed that the fields where I am using eval are causing the issue, as I see 0 in these columns after using fillnull:

| eval status_codes_only=if((status_code>=200 and status_code<300) or status_code>=400,1,0)
| search status_codes_only=1
| rex mode=sed field=ClintReqRcvdTime "s/: /:/"
| eval date_only=strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S :%3N %Z"), "%m/%d/%Y")
| eval year_only=strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S :%3N %Z"), "%Y")
| eval month_only=strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S :%3N %Z"), "%b")
| eval week_only=floor(tonumber(strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S :%3N %Z"), "%d"))/7+1)
| eval TwoXXonly=if(status_code>=200 and status_code<300,1,0)
| eval FourXXonly=if(status_code>=400 and status_code<500,1,0)
| eval FiveXXonly=if(status_code>=500 and status_code<600,1,0)
| fillnull date_only,year_only,month_only,week_only,organization,clientId,proxyBasePath,api_name,environment,Total_2xx,Total_4xx,Total_5xx
| stats sum(TwoXXonly) as Total_2xx,sum(FourXXonly) as Total_4xx,sum(FiveXXonly) as Total_5xx by date_only,year_only,month_only,week_only,organization,clientId,proxyBasePath,api_name,environment
| table date_only,year_only,month_only,week_only,organization,clientId,proxyBasePath,api_name,environment,Total_2xx,Total_4xx,Total_5xx

When I look at the field that I used to get the date_only, year_only, and week_only columns, I see data like this in the events:

Wed 11 Jun 2025 22:57:34:396 EDT
Wed 11 Jun 2025 22:56:43:254 EDT
Wed 11 Jun 2025 22:56:34:466 EDT
Wed 11 Jun 2025 22:56:28:404 EDT
Thank you! I will test option B.
Hi @gcusello that does make sense for the correlation searches, but I am still interested in the impact on the datamodel acceleration itself. Will there be issues in the tsidx files if the acceleration never fully completes? Or will the next summary pick up where it left off once it hits the summarization limit? If it's the latter, does that mean the most recent data is consistently getting delayed in its acceleration, because each acceleration search needs to catch up on the previous debt?
Hi @Trevorator , you have two solutions:
- Delay the time frame: e.g. if you have a delay in acceleration of 5 minutes, you can use as time borders in your Correlation Searches from -10m@m to -5m@m instead of from -5m@m to now.
- Otherwise you can use the option summariesonly=false in your tstats command, so the command also reads the data that has not yet been accelerated, but this solution is obviously less performant than the other.
Ciao. Giuseppe
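For reference, a hedged sketch combining both suggestions (the delayed window and summariesonly=false); the datamodel and field names here are placeholders taken from the CIM Network_Traffic model, not necessarily the model you are accelerating:

| tstats summariesonly=false count
    from datamodel=Network_Traffic
    where earliest=-10m@m latest=-5m@m
    by All_Traffic.action

In practice you would probably pick one approach or the other: delaying the window keeps summariesonly=true searches accurate, while summariesonly=false trades search performance for completeness.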
Hello there,

In our environment we have datamodel accelerations that are consistently reaching the Max Summarization Search Time, which is the default 3600 seconds. We know the issue is related to the resources allocated to the indexing tier, as the accelerations are maxing out CPU. It will be remediated, but not immediately.

What I am interested in finding out is how the limit is implemented: if an acceleration never completes, just times out and starts the next summary, is there the potential for some data to not be accelerated?

We also currently have searches using summariesonly=t with a time range of -30m, and our max concurrent auto summarizations is 2, so I know there can be up to a 55 minute gap in tstats data, meaning the searches could miss events. While not best practice, could setting the max summarization search time to 1800 seconds be a potential solution?

Thanks for your help!
Hello, 2 questions, but the second is more of a keepalived question than it is an SC4S question.

First question: what are the advantages of SC4S vs Nginx? From what I can see it doesn't really matter which you use, it's more a matter of preference - is that correct? My current setup is 2 HFs load balancing data between them, and they will have SC4S implemented on them for syslog.

Second question: I am planning on using keepalived to LB via a VIP. I planned to just track the Podman process in keepalived, as you would for Apache etc., and increment the priority if the process stopped, which would then make keepalived fail over the VIP; however, the Podman process doesn't get removed if it stops. What is the best way to achieve this? I'm guessing Podman has its own solution, but I rarely use Podman so I have no idea.
Hi, I'm searching for a way to modify my app/dashboard to be able to modify the entries of a table (such as delete/duplicate/copy/multiselect rows). Any suggestions? Maybe I have to look at the scripts from the Lookup Editor app? I really don't know where to start. I know how to write Python, but I haven't written a script like this before. Thanks
(Screenshot: Dashboard view)