All Posts


@toporagno The allow_skew value belongs in savedsearches.conf; you can set it there. For reference, the link to the official documentation: Offset scheduled search start times - Splunk Documentation
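For illustration, a minimal savedsearches.conf sketch (the stanza name "My Scheduled Search" is hypothetical; allow_skew accepts either a time value or a percentage of the schedule period):

[My Scheduled Search]
# Allow the scheduler to shift this search's start time by up to 5 minutes
allow_skew = 5m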
I was able to solve this by adding ' before and after $result.fieldname$, for example: '$result.fieldname$'
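For context, a sketch of where that quoting lands if you edit savedsearches.conf directly (the stanza name and fieldname are hypothetical examples):

[My Alert]
# Quoting the result token lets it expand in the email subject line
action.email.subject = Alert triggered for '$result.fieldname$'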
@anandhalagaras1 You can apply it on the HFs (heavy forwarders), if you have any.
Thanks in advance. 1. I have a JSON object as content.payload{} and need to extract the values inside the payload. Splunk already extracts the field as content.payload{}, and the result is "AP Import flow related results : Extract has no AP records to Import into Oracle". But I want to extract all the details inside content.payload. How can I extract them from a Splunk query or from props.conf? I tried spath but couldn't get it to work. 2. How do I rename the wildcard value of content.payload{}* ?

"content" : {
  "jobName" : "AP2",
  "region" : "NA",
  "payload" : [
    {
      "GL Import flow processing results" : [
        {
          "concurBatchId" : "4",
          "batchId" : "6",
          "count" : "50",
          "impConReqId" : "1",
          "errorMessage" : null,
          "filename" : "CONCUR_GL.csv"
        }
      ]
    },
    "AP Import flow related results : Extract has no AP records to Import into Oracle"
  ]
},
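A minimal SPL sketch for the nested extraction, assuming the raw JSON of the event is in _raw (paths taken from the sample above):

| spath input=_raw path=content.payload{} output=payload
| mvexpand payload
| spath input=payload

mvexpand turns each payload array element into its own result row, and the second spath extracts whatever keys that element contains (elements that are plain strings, like the AP message, simply yield no extra fields).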
Hello @yuanliu, thank you for the assistance. Comparing | bin _time span=1w (left) and | bin _time span=1w@w (right): when the span was changed from 1w to 1w@w, it looks like the data was shifted from 2024-02-08 to 2024-02-04. Why did Splunk shift the data? Is this normal behavior? I expect the data for 2024-02-04 to be NULL. Is there a way to leave the data as is (not shifted) when moving the start date to 2024-02-04? Please suggest. I appreciate your help.
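For anyone wanting to compare the two alignments side by side, a quick sketch (the date matches the example above; 1w@w snaps each bin back to the week boundary, which is why a 2024-02-08 event lands in the 2024-02-04 bucket rather than disappearing):

| makeresults
| eval _time=strptime("2024-02-08", "%Y-%m-%d")
| bin span=1w _time as week_1w
| bin span=1w@w _time as week_1w_at_w
| eval week_1w=strftime(week_1w, "%Y-%m-%d"), week_1w_at_w=strftime(week_1w_at_w, "%Y-%m-%d")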
I am collecting all other logs except the PaperCut logs from this specific host. The provided query doesn't return anything. I am sure that the service account has read access to the file. What are some other things I can look into that would prevent the UF from collecting a Windows file if everything Splunk related is correct? Again, thanks for the assistance.
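A couple of standard checks on the forwarder itself, assuming you have CLI access (commands are run from $SPLUNK_HOME/bin):

# Show the monitor stanzas the UF actually loaded, and which conf file each came from
splunk btool inputs list monitor --debug

# Ask the UF what it thinks of each monitored file (open, reading, ignored, etc.)
splunk list inputstatus

Beyond that, splunkd.log on the forwarder (search for the file path or for TailReader) usually states why a file is being skipped, e.g. an ignoreOlderThan setting, a blacklist, or an unreadable parent directory.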
@Santosh2 The cron expression you mentioned is wrong: 0 0-21, 7-23 5-9 3 1-7

Reference for cron jobs:

* * * * *

Each * denotes one field:
1. Minute
2. Hour
3. Day of month
4. Month
5. Weekday

Example: if you want to run a report every hour from 8 AM to 8 PM:
00 08-20 * * *
Hi, we are using the ITSI service map / Service Analyzer to monitor services. We have a use case where, for the same service, we need to add multiple KPIs, and those KPIs depend on different entities. For example: we have an infrastructure-related KPI which uses host as the entity; another KPI is "service up", which basically checks that the service is up, and in this case the entity is "process name". We also have a KPI for garbage collection which likewise has a different entity. Question: I am trying to understand the best way to handle such a scenario, where we can add all these KPIs without making the service map too complex.
@Santosh2 Modify the remaining fields (month, weekday, etc.) as per your requirement.
Hello, do you have experience with Splunk on Rocky Linux since then? We need to migrate our CentOS 7 soon, and one of the candidates is Rocky 9. But the system requirements page https://docs.splunk.com/Documentation/Splunk/9.2.0/Installation/Systemrequirements no longer lists its kernel version (5.14) (same for RHEL). I believe it will work, but since I need to migrate a physical production server, I want to reduce the risk as much as I can...
I did it. It's saying it won't run from 9 PM to 7 AM, but it's not working.
@Santosh2 As per your query, "my alert should not trigger between 9 PM and 7 AM; I used the below cron job but I am receiving alerts after 9 PM". So it means the cron job should run between 7 AM and 9 PM, correct? In that case, you can try this:

* 7-20 * * *

Hour: 7-20 (every minute of every hour from 7:00 AM through 8:59 PM; note that using 7-21 would still fire during the 9 PM hour)
Assuming you mean cron, not corn, try checking your expression with something like Crontab.guru, the cron schedule expression generator.
Dear Splunk user, using this sample data

[{"Field 859": "Value aaaaa", "Field 2": "Value bbbbb"}, {"Field 1": "Value ccccc", "Field 2": "Value ddddd"}, {"Field 1": "Value eeeee", "Field 2": "Value fffff"}]
[{"Field 759": "Value ggggg", "Field 2": "Value hhhhh"}, {"Field 1": "Value iiiii", "Field 2": "Value jjjjj"}, {"Field 1": "Value kkkkk", "Field 2": "Value lllll"}]

with this props.conf

[trbndrw_temp]
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = false
LINE_BREAKER = (?:\}(\s*,\s*)\{)|(\][\r\n]+\[)
TRANSFORMS-getrid = getridht

and this transforms.conf

[getridht]
INGEST_EVAL = _raw=replace(_raw, "(\[|\])","")

you may be able to achieve what you want. Happy splunking, Luca (aka "one DASH is always better")
I want to pass a dynamic value from my search result into the email alert subject. I tried $result.fieldname$ but it is not coming up in the email alert. Can someone help me? Thanks
WARNING! This answer is wrong. The date of this file will be the date it had when it was packaged into the installer (tgz/rpm).
Hi all, I set a cron job on an alert. My alert should not trigger between 9 PM and 7 AM. I used the below cron job, but I am receiving alerts after 9 PM: 0 0-21, 7-23 5-9 3 1-7 Is this cron job correct? Do I need to make any changes?
@burwell wrote: How about something like this

| makeresults
| eval start=strptime("02-01-2024", "%m-%d-%Y")
| eval today=now()
| eval time_difference=floor((today-start)/(60*60*24))
| eval mod_val=time_difference % 28
| eval days_to_patch=28-mod_val

Thank you, I think this does exactly what I need! Greatly appreciated!
Hello everyone, I followed the steps to install DSDL (https://docs.splunk.com/Documentation/DSDL/5.1.1/User/InstallDSDL) and tried this scenario: https://www.sidechannel.blog/en/detecting-anomalies-using-machine-learning-on-splunk/ But when I try to start the container, I get a 403 error. I checked roles and capabilities, I checked all kinds of posts from the community, and I checked global permissions on DSDL. Is there a known issue about that? Have a good day all, Betty
Thanks for the reply. I was able to resolve the issue today. The issue was not load. As mentioned, there are only a handful of files that actually match the criteria, and they see infrequent updates: under 10 new entries per minute, I would estimate. The issue ended up being the stanza itself being too vague. I'm not sure how Splunk monitors/parses these internally, but I believe there is room for improvement. Once I split out the stanzas into a couple that cover the majority of our use cases, the memory dropped back down to around 200MB.

The new stanzas, for anyone else finding this in the future, eliminated the `...` wildcard and replaced it with a single-level one by including a couple of stanzas for the common places these logs would be found:

[monitor:///var/www/*/storage/logs/laravel*.log]
index = lh-linux
sourcetype = laravel_log
disabled = 0

[monitor:///var/www/*/shared/storage/logs/laravel*.log]
index = lh-linux
sourcetype = laravel_log
disabled = 0

Thank you kindly for taking the time to answer; there are some pieces of advice you mentioned that I wasn't as familiar with, and it was good to learn more.